On 12.11.2013 11:04, carlo von lynX wrote:
On Sat, Nov 02, 2013 at 01:01:36PM +0100, [email protected] wrote:
Let me take my "standard example" regarding trust.  People always trust each other 
*with respect to some topic*.  One can rarely rank peers by some level of trust without 
such a reference.

For instance, concerning my income: there is little reason not to trust my 
employer with that information; after all, it's the one sending the money.  The 
bank will also have an easy time figuring it out.  Health data is a different 
beast: my doctor and the pharmacy know my personal health state about as well 
as, or even better than, I do myself.  Yet it's neither my employer's 
business nor my bank's.

Eventually that's how people organize their lives with respect to information.  By sharing 
some info you always risk it no longer being a secret, in case your trusted 
party defects.  Then you have to deal with and heal the situation.  As long as 
this concerns a certain topic only, as long as the leak is within bounds, this 
might be embarrassing.  But one can deal with it.  If it were about _all_ info 
at once, it typically becomes a real problem.
Ok

So organizing your social contacts into groups is inevitable.

At least it's IMHO typical.

So what makes human rights special?  They are inalienable.

Now how would we build a rights management system which distinguishes between 
those inalienable rights and those one can trade?  After all, it's going 
to be binary encoded.  And it should be dead easy to verify/audit.

Hence we build a system taking this inalienability as the basic axiom.  
(Frankly, in the beginning not even I was confident that it would be possible to 
successfully build a usable system based on such a rigid rights management 
system.  But it turned out to work well.)

The idea was to map the situation to set theory: you start with a set, and all you can 
trade away is a strict subset.  Problem solved.  The full set you can never trade away, 
not even accidentally.  That's what we map to "inalienable" rights.
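To make the idea concrete, here is a minimal sketch of that strict-subset rule in Python.  All names are illustrative, not Askemos APIs; the only point is that delegation is forced through an operation that refuses the full set:

```python
class RightSet:
    """An immutable set of rights.  Delegation may only hand out a
    *strict* subset, so the full set itself can never be traded away -
    that is the "inalienable" part, enforced structurally."""

    def __init__(self, rights):
        self._rights = frozenset(rights)

    def delegate(self, subset):
        subset = frozenset(subset)
        # Strict subset check: the delegated rights must be fewer
        # than what we hold, never all of them.
        if not subset < self._rights:
            raise PermissionError("may only delegate a strict subset")
        return RightSet(subset)

me = RightSet({"read", "write", "share"})
friend = me.delegate({"read"})               # fine: a strict subset
try:
    me.delegate({"read", "write", "share"})  # the full set: refused
except PermissionError as e:
    print(e)                                 # -> may only delegate a strict subset
```

Since every derived set is strictly smaller, no chain of delegations can ever reproduce the original full set.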

(((Note here: so far this is all "math & theory"; in practice, computers are to 
be used.  We could ignore our great rights management, switch it off and manipulate the 
bits on the hard disk.  That's why Askemos combines the principled solution with another 
one: have independent witnesses for all your data and updates.  That way no single broken 
machine can break the system, even though single machines may still be 
broken.)))
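The witness idea above can be sketched in a few lines.  This is not Askemos code, just an illustration of the usual byzantine quorum rule: an update counts only if more than two thirds of the independent witnesses computed the same result, so one broken machine cannot override the rest:

```python
from collections import Counter

def agreed_result(witness_results):
    """Accept the value reported by more than 2/3 of the witnesses
    (the usual byzantine quorum); otherwise reject the update."""
    n = len(witness_results)
    value, votes = Counter(witness_results).most_common(1)[0]
    if votes * 3 > n * 2:
        return value
    raise RuntimeError("no byzantine quorum - update rejected")

# Three honest witnesses outvote one broken machine:
print(agreed_result(["v2", "v2", "v2", "garbage"]))  # -> v2
```

With this rule a single manipulated hard disk produces a minority answer and simply gets outvoted.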
Have to think about this (including the bits I cut out)...
what "rights" are you actually modeling in Askemos?

Not too much different from any other permission management system.  Actually, I don't fully understand your question here.  As far as modelling is concerned, it's pretty much equivalent to Gödel numbers: everything gets a number.  The interpretation is up to the context.

The problem I see with forbidding the user to send his financial data
to a music forum is that the software wouldn't know what it is doing.
If the user chooses to torture himself, even we can't help.

To some extent, yes.  But at least one could set things up smarter than the typical computer environment of these days.

So posting a character sequence detailing financial info into the comment section of a forum is really a kind of stupidity for which there is no technical solution.

But if, say, your accounting software can be tricked into mailing some info to whatever address then it's pretty easy to disable this without disabling the software's ability to mail the info to your accountant. (Thinking along the lines of http://www.cap-lore.com/CapTheory/ here.)
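A toy version of that capability pattern, in the spirit of cap-lore.com (all names here are made up for illustration): instead of an ambient "mail anything anywhere" facility, the accounting software is handed a function that can only ever reach the accountant's address:

```python
def make_mail_capability(allowed_address, transport):
    """Return a send function bound to exactly one recipient.
    The holder never sees the transport itself, only this wrapper."""
    def send(subject, body):
        transport(allowed_address, subject, body)
    return send

# A fake transport so the sketch is self-contained:
outbox = []
def fake_transport(to, subject, body):
    outbox.append((to, subject, body))

mail_accountant = make_mail_capability("[email protected]", fake_transport)
mail_accountant("Q3 report", "figures attached")

# The software holds no capability to mail anyone else, so it cannot
# be tricked into leaking the data to another address.
print(outbox[0][0])  # -> [email protected]
```

Disabling the "mail to whatever address" attack then means simply not handing out any broader capability, rather than patching the software.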

However: those capability theories start out with some pre-defined capability distribution.  That's what our contribution is about: instead of some pre-trusted - and therefore possibly already corruptible or corrupted - distribution, we start from an obviously safe distribution and give a rule to derive distributions which are safe if the original was safe.  Proof by complete induction.

OK; forget about this detail.  It comes up rather late in the whole reasoning, when we 
start to look into how public knowledge and such cultural heritage are to be 
treated.  After all: if we allow "ownership" of information, we still want/need 
some knowledge to be public.  In fact we need such knowledge to be able to communicate at 
all.
Oink II?

Hm. So you've been asking.  No promises that this is well worded; in fact it's not going to be.  When one sets up an Askemos node, one starts from an empty system - that was the first idea of how one would want to do it.  But this is like installing Linux without having anything: no binaries, no sources, nothing.  It just does not work.  It's the same with the human brain: you started out empty, and your parents installed your basic knowledge.  But if they had had no language to start with, you would be unable to talk today.  That's the point: to begin communicating, some knowledge must already be there.  If you want to assign "ownership" to this body of knowledge, you will end up re-inventing the concept of public knowledge, a.k.a. cultural heritage.

(Alternatively you might want to declare all this knowledge your private property to begin with.  If you were able to actually force people to accept your proprietary rights, all you would be able to do is suppress further communication - including isolating yourself.  It's pretty much the same with patent law: if you were able to patent basic concepts - don't get me started on how some people always try to get that done - then no software at all would be possible anymore.  Imagine the concept of a conditional branch no longer being available...)


the trust levels serve the purpose to recreate a facebook-like user
experience. if you want to use psyc in a more high security fashion
you can use it differently.
That's precisely where I don't buy into the idea that this can
be done using a single number.
Facebook does it with a boolean.
Or so.

No?
Probably yes.  I'll have to take your word for it.  ;-)

I assume Facebook has a great solution there.  -  Could you explain to 
stupid me which problem it happens to solve?  ;-)
I'm saying that our goal is to provide people with reasons to get their
assets off of Faceboogle.

OK, I don't have any assets on Facebook.  I have the stuff I want to keep (published or private) in Askemos.

In theory this could work for quite a lot of people.  All they would need is a few friends/relatives whom they would trust with some kind of data.  And, well, they'd need to run a node, because the "virtual server" is made from redundant nodes.

(I'm kinda foreseeing nodes being run on mobile phones; but we're not yet there.)

  I don't believe sharing your party pictures with
your employer is the #1 usage problem we have to deal with - users are
slowly learning to be aware of that and it is actually a tech problem of
Facebook that it cannot separate the employers from the party people.
With the PSYC channel logic separation in secushare is easy.

Hm, wasn't this the point where I was unsure in the first place?  I don't see that a floating point number is enough to assure strict enough permission handling.  I require more (see above).  And I'd recommend everybody go back to the "human rights" test to verify that their permission management system will not betray them.

(Let alone that I'd furthermore require byzantine fault tolerance for the processes.  That's why I so far believe that PSYC is not as powerful as I'd need it to be for use in BALL.  But that's still an open question to me.  I could just skip over the permission handling of PSYC and layer what we have on top.  I'm less sure that I can get to a byzantine fault tolerant system as long as PSYC forces one master node to be the SPOF.)

The problem with Faceboogle is how everything is accessible for Mallory -
and that is currently what we are trying to solve.

That's what we solved.  Outside of FB.

  We can look into
perfect trust modeling, too - but given the current threat it's an
academic research topic.


??? }:-/  Don't get that.  BALL is real.  It's self hosting.

And: there is nothing as practical as a sound theory!

Each ad hoc system is going to be another failure sooner or later.  And those failures will confuse people and contribute to their uninformed but general distrust.  I'm really opposed to discounting trust models as an "academic research topic" - especially when we are talking about a proven criterion.

The only thing which needs to be solved is an implementation detail: we currently depend on X.509 for auth.
This should be replaced with a web of trust (GPG).
The library actually supports this already.  All we'd need to do is find the resources to actually implement its use and test it.




So let me suggest: no matter what you don't like about this "BALL" and what you'd like to do better, PLEASE try to follow the principle of inalienable privileges.  Don't run into that trap again, as so many did before.  Or else: if you want to bring another permission control system, please accompany it with a similar proof.


Best Regards

/Jörg

-- [email protected]
   https://lists.tgbit.net/mailman/listinfo.cgi/secu-share
