There is nothing hard about it, assuming you are using a decent language.
Just add a Crypto<T> type that wraps integers and booleans and that doesn't
allow any non-constant-time operations nor implicit conversion to anything that
is not Crypto<T> (which of course means you can't index memory or do
Nice, but why isn't the LLVM optimizer removing the move?
Is it lack of proper alias analysis?
Sounds like that is a separate issue worth pursuing.
The remaining, more difficult, issue is initialization of
aggregate data structures via constructor functions, which still
involves a bunch of
Something like this will work, yes. It'll probably look more like:
Box::new(*x)
This will be described in some of the RFCs that are coming up soon.
Awesome!
We should really get rid of the ~T syntax in favor of Foo<T> (where Foo = Box,
Own, Heap, etc.), since it is deceptively simple
At the moment, Rust is completely broken in this regard. The following
expression evaluates to None:
Some(~())
Ouch, this is a disaster.
Is there a bug filed for this?
Anyway, I don't get your argument about size to free having anything to do with
fixing it (although I agree that size to
I think the best solution is to add uN and sN types where N is not a power of
two, which LLVM should already support.
Then you can write your match like this:
match (val >> 6) as u2
{
...
}
And it will work as desired.
Biggest issue is that to make it work nicely you'd need to add some way to
However, the extensibility of trait objects comes at the cost of fat
pointers, which can be a problem if you have a lot of pointers.
This is fixable without introducing virtual functions, by adding a way to
express Struct and vtable for impl Trait for Struct and thin pointer to
Struct and
I see a proposal to add virtual struct and virtual fn in the workweek
meeting notes, which appears to add an exact copy of Java's OO system to Rust.
I think however that this should be carefully considered, and preferably not
added at all (or failing that, feature gated and discouraged).
The
it is guaranteed to happen on all readers
I meant all finite readers, such as those for normal disk files.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev
I don't think so, because the fact that the particular instance of T implements
the Deref trait cannot have any effect on the decorator code, since it's not in
the bounds for T.
What instead would work is to change the language so that if type Type
implements Trait and all Trait methods take
Stack management for green tasks has been based in the past first on segmented
stacks and then on standard large stacks.
However, I just realized that there is a third alternative which might well be
better than both of those.
The idea is very simple: a green task would run on a large stack
I assume this is incompatible with work stealing and task migration
between threads?
It is compatible, assuming that the number of large stacks is sufficiently
larger than the number of threads.
Basically, each green task can only run on a specific large stack, but as long
as you aren't
Interesting: my proposal appears to be indeed a generalization of the greenlet
approach.
Specifically, while the greenlet proposal seems to only use one large stack per
native thread, I'm suggesting to use multiple large stacks that can be stolen
by other threads, which does mitigate the
The ratio of native threads to stacks and of stacks to tasks can actually be
used to characterize all systems discussed.
(stacks/thread, tasks/stacks)
(1, 1) = current Rust native tasks
(1, N) = Python greenlets
(N, 1) = current Rust green tasks
(N, M) = proposal in my original mail
This can be easily implemented in Rust as a struct doing exactly that.
There's no need to modify the I/O layer, since you'd simply borrow an &[u8]
from the type and pass it, resulting in the I/O layer directly writing into the
locked zeroed-on-destruction memory.
As for crypto, it seems the
At any rate, note that what you are trying to do only provides some mitigation
and is far from a complete solution, because in practice you can't prevent
leakage of all confidential data in this way (what about hibernation while the
key is in memory? what about plaintext decrypted with the
Some languages support a special do notation that allows one to express monadic
operations more naturally.
However, there is an even more powerful option, that I'd call in notation (I
came up with it, but it's obvious, so I'm sure there's some language that has
something like it).
The idea is
Maybe the language should be changed to allow Iterator to be changed to have a
signature like this:
pub trait Iterator<A> {
    fn next<'a>(&'a mut self) -> Option<&'a A>;
}
Then you could return the &mut by reborrowing and would be able to advance the
iterator without issue afterwards.
No, the problem you describe does not exist in my implementation because it
requires an &mut to the smart pointer.
In particular, if the reference count is 1, then there is no other Rc and Arc
pointing to the same data, and because we have an &mut there is also no other
borrowed reference to the
Never use digest mode.
Instead, use normal mode and if necessary add a filter in your e-mail website
or application to separate all mailing list messages in a specific folder or
label.
Hello, I already implemented a persistent tree-map called SnapMap: you can find
the source code at https://github.com/mozilla/rust/pull/9816
I stopped working on it before I made a serious effort to push it into the Rust
codebase and don't have time to work further on it, so it would be
Did COW improve performance? What's a good way to do performance
testing of Rust code?
The reason I introduced COW when RC > 1 is that it allows persistent data
structures to be mutated in place if there aren't extra references, just like
non-persistent data structures.
Lots of languages
Have you considered making deref the default instead and requiring moves to use
an explicit move keyword?
Basically, from this hypothetical syntax to current one:
- x = &x
- mut x = &mut x
- move x = x
One could even make the & implicit in parameter types in function declarations
unless the move
I see several proposals for the future of Rust tasks, and I think one of the
best approaches is being overlooked, and that is something similar to async in
C# (http://msdn.microsoft.com/en-us/library/vstudio/hh191443.aspx).
In C#, the async keyword can be applied to functions and it causes the
This is similar to the IO monad in Haskell. Adding that to previously pure
computational code is painful, but on the other hand it does emphasize
that computational code != IO code, and minimizing mixing between the two
typically leads to better design overall.
Yes, but you can have the compiler
The issue with async/await is that while it maps very well to the AIO
primitives like IOCP and POSIX AIO, it doesn't map well to something
that's solid on Linux. It's just not how I/O is done on the platform.
It uses *non-blocking* I/O to scale up socket servers, with
notification of ready
Although, on second thought, one could just free the unused part of the user
mode stack whenever a thread blocks, either in the user mode code (i.e. using
madvise MADV_DONTNEED or equivalent to discard everything below the stack
pointer modulo the page size, perhaps minus the page size) or
It seems to me that trying to determine max stack size is incompatible with
dynamic linking. So even if you disallow recursion, any function that calls
a function outside of its own crate is not going to be able to trust its
calculated max stack size.
The maximum stack size needs to
What about the idea of making Result cause task failure if it is destroyed in
the error case? (as opposed to destructuring it)
This just needs a simple language change to add an attribute that would be
applied on Result to declare that it's OK to destructure it and cause drop() to
not be
Allowing one closure to take &mut while another takes &const would
create a data race if the two closures are executed in parallel.
Closures executable in parallel would probably have kind bounds forbidding
&const:
http://smallcultfollowing.com/babysteps/blog/2013/06/11/data-parallelism-in-rust/
The issues in O'Caml seem to be due to the fact that in O'Caml function
parameter and return types are inferred, and thus accidentally oversized enum
types can propagate through them.
In Rust, they must be specified by the user, so those oversized enum types will
cause an error as they are passed
I was talking about
http://smallcultfollowing.com/babysteps/blog/2012/08/24/datasort-refinements/,
which essentially introduces structural enums but only for variants belonging
to the same named enum.
* This strikes me as an extreme change to the language, but
perhaps my gut is
I was reading a proposal about adding datasort refinements to make enum
variants first-class types, and it seems to me there is a simpler and more
effective way of solving the problem.
The idea is that if A, B and C are types, then A | B | C is a structural
enum type that can be either A, B or
2. Distribute compilations and tests across a cluster of machines (like
distcc)
Compilation is 99% serial (the only things that happen in parallel
are rustpkg and rustdoc etc at the end, and they are almost nothing),
though tests could be distributed (and Graydon is working on doing
Have you considered the following non-specific quick fixes?
1. Build on a ramfs/ramdisk
2. Distribute compilations and tests across a cluster of machines (like distcc)
3. If non-parallelizable code is still the bottleneck, use the
fastest CPU possible (i.e. an overclocked Core i7
4770K,
I believe that instead of segmented stacks, the runtime should determine a
tight upper bound for stack space for a task's function, and only allocate
a fixed stack of that size, falling back to a large C-sized stack if a bound
cannot be determined.
Such a bound can always be computed if
For Rc, it should be enough to have the GC traverse raw pointers
that have/don't have a special attribute (probably traversing by
default is the best choice), as long as the raw pointers have the
correct type.
Obviously it only makes sense to traverse types if it is possible for them to
point
This would be a big step away from the advantages of Rust's current
trait system. Right now, if the definition of a generic function type
checks, it's valid for all possible types implementing the trait
bounds. There are no hidden or implicit requirements.
Yes, but since Rust, like C++,
Every
single reviewer I showed a no-cycles variant of Rust to told me
it was unacceptable and they would walk away from a language that
prohibited cycles in all cases. All of them. We tried limiting it
within subsets of the type system rather than pervasively: it still
Restructuring your code to avoid cycles is problematic when you're
implementing a platform where the spec allows users to create ownership
cycles --- like, say, the Web platform. So if
Rust didn't support cyclic ownership, Servo would have to implement its own
GC and tracing code just
Date: Thu, 16 May 2013 10:58:28 -0700
From: gray...@mozilla.com
To: bill_my...@outlook.com
CC: rust-dev@mozilla.org
Subject: Re: [rust-dev] Adding exception handling as syntax sugar with
declared exceptions
On 12/05/2013 8:00 PM, Bill Myers wrote:
This is a suggestion for adding
Scala has a similar design, with the following traits:
- TraversableOnce: can be internally iterated once (has a foreach() method that
takes a closure)
- Traversable: can be internally iterated unlimited times (has a foreach()
method that takes a closure)
- Iterable: can be externally iterated
Reference counting is generally more desirable than garbage collection, since
it is simple and deterministic, and avoids scanning the whole heap of the
program, which causes pauses, destroys caches, prevents effective swapping, and
requires tolerating increased memory usage by a multiplicative
This is a suggestion for adding an exception system to Rust that satisfies
these requirements:
1. Unwinding does NOT pass through code that is not either in a function that
declares throwing exceptions or in a try block (instead, that triggers task
failure)
2. Callers can ignore the fact that a