Re: Introduction.

2012-03-30 Thread  
--- On Sat, 2012/3/31, Daniel E. Wilson d...@bureau-13.org wrote:

 Hi, my name is Dan and I have submitted version 5.2.1 of the Racket 
 Scheme interpreter for review.
 
 I use Lisp-family languages for my own projects and needed a more recent 
 version than the plt-scheme package on my Fedora box.  When I found out that 
 plt-scheme was orphaned I decided to become a package maintainer, since I 
 cannot possibly be the only person who needs this software.
 
 

Hi Dan,

I've never used Racket myself, but you may also be interested to 
know that Guile 2.0 is out (Fedora currently ships 2.0.3 and is, I think, 
moving to 2.0.5 soon). The more solid Scheme environments the better, IMO. :-) 
Have fun.

-IY
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel

Re: P2P Packaging/Koji Cloud

2011-12-07 Thread  
On 12/08/2011 05:12 AM, seth vidal wrote:
 Bandwidth is the big concern for the end user here, and then the other
 issue is: is all of this worth it for building pkgs? I don't think it
 is, personally; pkg building is not that huge of a hit, afaict, to
 getting things done.

 I mean, the sum total of the steps we're talking about, even now, is more
 or less:

 1. init host
 2. stuff some files onto it
 3. start up a process
 4. communicate to that process
 5. build pkg
 6. stuff pkgs into a local repo
 7. go to 5 until no more pkgs
 8. download all pkgs back to original client
 9. destroy host.

 you can do it now if you're willing to do steps 5-8 manually.
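
Steps 5-8 above can be driven by a small script. This is only a sketch,
assuming the stock mock and createrepo_c tools; the config name and result
path here are illustrative defaults, not Fedora infrastructure specifics.

```python
import subprocess

# Assumed mock chroot config and its default result directory; adjust to taste.
MOCK_CONFIG = "fedora-rawhide-x86_64"
RESULT_DIR = "/var/lib/mock/%s/result" % MOCK_CONFIG

def build_cmds(srpm):
    """Commands for one iteration: step 5 (build pkg), step 6 (local repo)."""
    return [
        ["mock", "-r", MOCK_CONFIG, "--rebuild", srpm],
        ["createrepo_c", RESULT_DIR],
    ]

def run_all(srpms):
    """Step 7: repeat until no more pkgs. Step 8 (downloading the results
    back to the original client) would be an rsync of RESULT_DIR."""
    for srpm in srpms:
        for cmd in build_cmds(srpm):
            subprocess.run(cmd, check=True)
```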

I think you are correct in essentially asking if this is a solution in 
search of a problem.

But on the technical side...
I don't think bandwidth would be much of an issue. For one thing it 
could be throttleable, and with a huge number of available systems there 
would still be an overabundance of capacity for builds. And anyway, most 
Fedora users have no problem sucking down a DVD-sized spin each time they 
want to try something other than the default desktop.

If this were implemented, the entire cycle should be per package, or 
perhaps per some user-configurable limit of packages (like do blocks of 
10 rpms, or do blocks of packages between 500MB and 1GB). With parceled 
building like that the iterations are smaller but more frequent, and I 
think less of a burden for someone who just wants to try things out.
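
That parceling rule (blocks of N rpms, or blocks up to some total size) is
simple to sketch; the count and size limits below are hypothetical knobs,
not anything mock or koji actually expose:

```python
def batch_packages(pkgs, max_count=10, max_bytes=500 * 1024**2):
    """Split (name, size-in-bytes) pairs into blocks of at most max_count
    packages or max_bytes total, whichever limit is hit first."""
    blocks, cur, cur_size = [], [], 0
    for name, size in pkgs:
        if cur and (len(cur) >= max_count or cur_size + size > max_bytes):
            blocks.append(cur)          # current block is full; start a new one
            cur, cur_size = [], 0
        cur.append(name)
        cur_size += size
    if cur:
        blocks.append(cur)              # flush the final partial block
    return blocks
```

Each block would then map to one init/build/destroy cycle from the quoted
step list.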

And on the practical side...
This all goes back to the first sentence in this response. While it is 
very possible to do something like this, and the idea is exciting 
because it is something new, I've never heard of anyone kicking and 
screaming about a wait-queue bottleneck or insufficient resources in the 
Fedora infrastructure. I've also not heard anything like "RH is going to 
drop Fedora so we need to look for a new home", either. I was simply 
addressing the "how to make it secure" element. This is a workable 
method which relies 100% on volunteers (i.e. community resource use, not 
a paid solution) vs. going to some overwrought cloud solution for which 
there really isn't any backstop for integrity checking compared to the 
distributed-build crosscheck I've described above.

Re: P2P Packaging/Koji Cloud

2011-12-07 Thread  
An idea just struck me that may work.

If the system is made light enough that it is utterly painless for 
anyone to contribute processing time, then cross-checking of hashes could 
be made statistically secure, save for a widespread compromise of the 
entire Fedora userbase.

For example, if I could just "yum install skyline"[1] and then "chkconfig 
skyline on" (or whatever the systemd equivalent is -- sorry, haven't kept 
up lately), and my system just started pulling packages to build/rebuild 
constantly in the background at a priority level low enough that I'd never 
notice it, then overnight Fedora as a project would have FAR more build 
time than it needs.

So how is this secure?

Each time a build is made, the building system makes a hash of the set 
of RPMs in $wherever/mock/result/{foo,bar} and sends the completed data 
back to the Fedora build system.
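
A deterministic hash over that set of result files could be computed by
sorting them first, as in this sketch. The big caveat: matching digests
across builders assumes bit-identical (reproducible) builds, which real
RPMs only approach with reproducible-build settings, since timestamps and
signatures otherwise vary per builder.

```python
import hashlib
import pathlib

def result_set_hash(result_dir):
    """Hash the *set* of RPMs in a mock result directory. Sorting by
    filename makes the digest independent of build/listing order, so two
    builders producing identical bits report an identical digest."""
    h = hashlib.sha256()
    for rpm in sorted(pathlib.Path(result_dir).glob("*.rpm")):
        h.update(rpm.name.encode())   # bind each file's name to its content
        h.update(rpm.read_bytes())
    return h.hexdigest()
```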

Now the Fedora build system checks the hash to make sure it is correct. 
Of course it is, because we've only got one sample.

But then we collect the other builds of the same RPM from, say, 10 other 
random systems and compare their hashes to what was received from the 
initial build system.

A hash that doesn't match whatever arises as the common hash is either an 
error (likely, considering the amount of transmission involved) or 
poisoned. These will stick out like a sore thumb amongst the forest of 
correct builds.
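
The comparison step amounts to a majority vote over the reported hashes; a
minimal sketch (builder IDs and the idea of a quorum size are assumptions,
not an existing koji interface):

```python
from collections import Counter

def flag_outliers(reported):
    """Given {builder_id: hash}, find the hash that arises as the common
    one and return it with the builders whose hash disagrees -- the
    build errors or poisoned results that stick out."""
    common, _count = Counter(reported.values()).most_common(1)[0]
    suspects = [b for b, h in reported.items() if h != common]
    return common, suspects
```

With, say, 10 random rebuilders plus the original, a single poisoned result
lands in the suspect list while the consensus hash carries the vote.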

So how would I compromise this system? I would have to poison every 
single SRPM departing the Fedora build infrastructure. Hard, maybe, but 
possible.

How do we prevent that? Use SSL to transfer the packages in the first 
place, and now that avenue is not available in the time allotted for the 
attack to proceed.

This system depends chiefly on one thing: having a boatload of 
contributors. I think we could easily expect 1,000+ active contributors, 
particularly if we make this a happy competition complete with a stats 
tracker the way the BOINC and SETI@home trackers work. People who send 
more than one bad hash could be notified by their system that things 
aren't working out, which could be an early warning in identifying 
whether the system is indeed being attacked (or the individual's system 
has been compromised). A throttle based on such indicators could let us 
know to halt the distributed build and switch back to old koji.

Just some ideas.

1. Sorry, silly package name, from the image of a city skyline under 
construction: koji + cloud.

Re: Trusted Boot in Fedora

2011-07-03 Thread
On Wed, 2011-06-29 at 13:48 +0200, Björn Persson wrote:
 Miloslav Trmač wrote:
  First, neither the TPM nor the CPU can really tell the difference
  between the owner of the computer and the author of a virus.
 
 A jumper on the motherboard, or some other kind of physical circuit breaker, 
 can do that. It would have been possible to design the TPM to accept a new 
 master key only when a certain circuit is closed.

It would have been possible, but remember the purpose and history of
Trusted Computing (of which this is a fundamental part) before it hit
the commercial scene. Originally this was conceived as a way for
government workers of various types to be able to use secure computing
systems even *after* an unattended period. The whole concept is based on
finding a way to circumvent the first law of information security: if
the attacker has physical access, you don't have security. If a
circumvention jumper were designed into the system, this would defeat the
purpose.

Today we are having this discussion in the commercial and private space
only because it is a technology the government already understands and
would therefore feel confident designing anti-circumvention legislation
around to suit the needs of the pro-DRM folks. It has the added benefit
that a red-herring "security for everyone" argument can be made to
support the concept of including DRM enablers in all digital devices in
the commercial space. Of course, the TPM piece being an Intel-only
standard, and the software behind it being a black-box set of processes,
undercuts the non-DRM commercial hype at the root. That this naturally
benefits Intel far more than it benefits anyone interested in actually
knowing what their system is up to (one phrase for that is "information
security") is easy to overlook.

The idea that government interest is still driving this is a bit shallow
-- there are already functionally identical systems which have been
fielded (and the customer in this case, who really is concerned with
complete security, does not have the handicap of being made to trust any
black-box processes at any level, anywhere), and I've already attempted
to place this discussion in perspective elsewhere. In short, this is a
step toward DRM of a sort nobody can quite fathom yet. Ultimately it
will prove to be scary to the point that I seriously feel it will be
dropped in the commercial space, and media providers (and Microsoft) will
simply have to evolve or get eaten by whoever else does first.

-Iwao


Re: Trusted Boot in Fedora

2011-06-24 Thread
On Fri, 2011-06-24 at 11:11 +0200, Till Maas wrote:
 On Fri, Jun 24, 2011 at 10:01:45AM +0100, Camilo Mesias wrote:
  I am still struggling to see real applications for this. I don't know
  how a networked system using the technology could be differentiated
  from an (insecure) software simulation of the same from a remote
  viewer's perspective. Also I don't see how it would be used in the
 
 Afaik it would allow securely entering hard disk encryption passwords
 via network on a Fedora system, because one can ensure that the correct
 (untampered) initrd / kernel is loaded.
 You cannot simulate this afaik because the cryptographic keys used are
 only stored in the TPM module and cannot be accessed from the outside.
 Therefore one needs to tamper with the TPM module instead of only with
 the unencrypted /boot partition, which is a lot harder from my point of
 view.
 
And as time passes and weaknesses are exposed in the encryption scheme
hard-wired into the TPM component, what do we do then other than buy new
hardware in a panic? (Assuming this becomes a technology we all come to
depend on in some way and doesn't just sort of die off in the commercial
space as I expect it will.)

There is nothing preventing smart people from being smart, and this is
why hard-wired crypto solutions are always both of extremely short
usefulness (you have to buy a new $device to either change compromised
keys or upgrade to higher security) and under enhanced threat due to
their value as slow-moving targets for attackers. The best middle-ground
solution I've seen is to involve a hardware device such as an IC
card/smartcard/dongle that is easily expendable/removable/cheap, so the
major components do not themselves become expendable.

This is the direction the government and military are coming from --
viewing crypto components as expendable -- because they are always
subject to attack. Either the TPM and stored hashes are removable, or the
entire computing system has an extremely short lifecycle. They are
interested in the technology, but the flavor of their interest is
different from that of the commercial DRM vendor space -- and I don't see
any other driving interest in the commercial space than this. The
commercial space has a significantly different take on things, and also
an overwhelming underestimation of how effective the wild unwashed masses
are at circumventing such technologies when given sufficient reason (and
anything is a good reason to some people).

But we already have smartcard, dongle, etc. solutions, and their
usefulness extends to where they are used today. How is TPM any
different, other than that it is inextricably tied to the rest of my
computer, and now my computer can be regulated? Simply guaranteeing that a
certain kernel was booted guarantees nothing -- a proper kernel can still
be the platform for sinister activity. And anyway, hashing and verifying
the hash of the kernel can be done in other, removable (and
device-independent) ways than hardwiring the solution into the computer.

If I want to use the same computer for 5 years, but someone either
cracks the algorithm behind the encryption used or finds a repository of
generated keys (or even just a slight weakness in the randomness of
generated keys, thus massively reducing the set of actual vs. theoretical
keys), what am I to do? I like Netflix and want to keep watching, but the
chipset I have is no longer acceptable under their EULA, so I have to buy
new hardware that I don't want or otherwise need. Currently this happens
with forced Windows upgrades and we all rail against that. Now it can
happen on a different level, because we are introducing a new layer of
hardware requirements -- one that can be as strictly enforced as it is
arbitrary.

Those are my concrete concerns, and I don't see how hardwiring what is
essentially a mathematical solution to a problem is the right direction
from a technical standpoint in the consumer space. In fact, historically
speaking this is a direct step back away from fully programmable
information processing systems, because we are now hardwiring security
components into the system. This sounds like a 1950s solution in need of
a 2010s problem.

The dream (or rather the public sales pitch) is that with TPM we can
leave laptops unattended for extended periods in hotel rooms and not be
subject to "evil maid" attacks, because the system will verify itself in
a way that can't be overwritten by the maid. But this is silly. If you
lose control of the device, what is to prevent said evil maid from simply
swapping your processor or tampering in other ways with the hardware
(after all, the tboot protocol is already described as skipping the
check if a non-TXT-enabled device is present)?

It didn't take long for iPhone hackers to find nifty solutions to their
perceived problems; I can't imagine professional security crackers will
not come up with similar solutions in a jiffy. We will never escape the

Re: Trusted Boot in Fedora

2011-06-24 Thread
On Fri, 2011-06-24 at 11:41 +0200, Miloslav Trmač wrote:
 2011/6/24 Tomas Mraz tm...@redhat.com:
  Yes, I completely agree. What Gregory tries to emphasize here - as I
  understand it, of course he might have a different intention - is purely
  politics, and I do not think that Fedora should get involved in
  political decisions one way or another.
 
 Frankly, I view the DRM issue as somewhat of a red herring in this
 discussion.  I can't see any reasonable way to set up a TPM-based DRM
 scheme for general-purpose computers: where does the trust come from?
 If nothing else, there must be thousands of common computer
 models/configurations; if a client connects to a music shop for the
 first time, how can the music shop tell the difference between an
 unmodified computer and a computer modified to record the music files?
 
 A company's IT department can install the computer from scratch by a
 trusted employee, measure the system, record the results, and use
 that as a baseline for the future use of the TPM for attestation
 within that company.
 
 A maker of an entertainment console can do something similar before it
 ships the device to customers.
 
 But for a general-purpose computer designed by a third party, I really
 can't see the trust mechanism.
 Mirek

Perhaps you just answered your own question in reverse.

Have you considered that the real goal could easily be to exclude
third-parties?

-Iwao



Re: GNOME3 and au revoir WAS: systemd: please stop trying to take over the world :)

2011-06-17 Thread
On Fri, 2011-06-17 at 15:50 +0530, Rahul Sundaram wrote:
 On 06/17/2011 03:50 PM, Henrik Wejdmark wrote:
  On my desktop it's not on one page, it's a mile-long listing, so you get
  no overview at all. In Gnome2 at least all the apps are categorized. If
  the graphical user interface _requires_ you to use the keyboard to type
  the command, it has failed its task and we might as well go back to bash.
 
 The GNOME 3 menu has categories on the right as well, but in any case
 the common apps are in the dash, and using a keyboard with a
 search-as-you-type interface isn't the same as using bash. Let us not
 be dramatic.
 
 Rahul

Considering the frequent calls of "Gnome 3 has failed at its task" or
"the GUI has failed if the user must ..." makes me wonder: where is the
task definition or specification against which the implementation has
failed?

"Doesn't live up to my expectation" is very different from "Doesn't
comply with spec", and both are different from "Is a bad design".



Re: GNOME3 and au revoir WAS: systemd: please stop trying to take over the world :)

2011-06-17 Thread
On Fri, 2011-06-17 at 10:04 -0400, Bernd Stramm wrote:
 On Fri, 17 Jun 2011 19:33:18 +0900
 夜神 岩男 supergiantpot...@yahoo.co.jp wrote:
 
 
  Considering the frequent calls of "Gnome 3 has failed at its task" or
  "the GUI has failed if the user must ..." makes me wonder: where is
  the task definition or specification against which the implementation
  has failed?
  
  "Doesn't live up to my expectation" is very different from "Doesn't
  comply with spec", and both are different from "Is a bad design".
 
 How about a spec then of what Gnome3 was trying to achieve, and how
 about those who like it telling us how Gnome3 achieved those things?

And this is precisely my point. At the moment criticism and defense both
seem a bit aimless, because we aren't seeing any references to the
interface research someone said happened, interface specifications, or
even a concept discussion/summary about what gnome-shell was supposed to
achieve. It was a serious undertaking, so I'm certain they had goals
which were at least clear to someone at some point.

So far I haven't been able to locate whatever dialogue was had within
the GNOME dev team about the new interface design; I've looked, just
obviously not hard enough or in the wrong places. I'll find it
eventually when I have time. This issue will someday deeply affect my
customers, so it is important to me.

As far as smoothly integrated introductory first-run interface tutorials
go, I strongly suspect that the angst aired up to this point over the
limited-discoverability problems some perceive will prompt a pleasant
adjustment in the nearish future -- but I've been wrong about these
things before.

-Iwao


Re: GNOME3 and au revoir WAS: systemd: please stop trying to take over the world :)

2011-06-17 Thread

 Perhaps the Gnome3 way of thinking is that calling up an additional
 application constitutes starting a new task in the work flow, so that
 the big interruption happens anyway. I don't think that is a good
 assumption for the design of a DE.

This is the sort of criticism that grants a clear place from which to
start rethinking the problem.

-Iwao

PS: And to everyone who thinks list discussions must necessarily descend
into flaming madness after a certain critical mass of messaging has been
reached: Would you really have things any other way?
