Re: [rust-dev] Detection of early end for TakeIterator

2014-05-31 Thread raphael catolino
In this case I think it would be simpler to just zip the iterator with
range(0, n) than to implement a whole Counted iterator.
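For what it's worth, a rough sketch of that zip approach in today's syntax
(0..n in place of the old range(0, n); take_exact is just an illustrative
name, not anything in std):

fn take_exact<I: Iterator>(iter: I, n: usize) -> Result<Vec<I::Item>, usize> {
    // Zipping with 0..n stops after at most n items; if fewer than n pairs
    // come out, the underlying iterator ran dry early.
    let items: Vec<_> = iter.zip(0..n).map(|(x, _)| x).collect();
    if items.len() == n { Ok(items) } else { Err(items.len()) }
}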
On May 31, 2014 4:02 AM, Kevin Ballard ke...@sb.org wrote:

 I suspect a more generally interesting solution would be a Counted
 iterator adaptor that keeps track of how many non-None values it's returned
 from next(). You could use this to validate that your Take iterator
 returned the expected number of values.

 pub struct Counted<T> {
     iter: T,
     /// Incremented by 1 every time `next()` returns a non-`None` value
     pub count: uint
 }

 impl<A, T: Iterator<A>> Iterator<A> for Counted<T> {
     fn next(&mut self) -> Option<A> {
         match self.iter.next() {
             x@Some(_) => {
                 self.count += 1;
                 x
             }
             None => None
         }
     }

     fn size_hint(&self) -> (uint, Option<uint>) {
         self.iter.size_hint()
     }
 }

 // plus various associated traits like DoubleEndedIterator
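In today's Rust the same adaptor, plus a quick check of the early-end case,
might read roughly as follows (a sketch using usize for uint and the current
Iterator trait, not the 2014 code above verbatim):

pub struct Counted<I> {
    iter: I,
    /// Incremented every time `next()` returns a non-`None` value.
    pub count: usize,
}

impl<I: Iterator> Iterator for Counted<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        let item = self.iter.next();
        if item.is_some() {
            self.count += 1;
        }
        item
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

fn main() {
    // Ask for 5 items from a source that only has 3: count < 5 reveals the
    // early end that a bare take() would otherwise swallow silently.
    let mut counted = Counted { iter: vec![1, 2, 3].into_iter(), count: 0 };
    let got: Vec<i32> = counted.by_ref().take(5).collect();
    assert_eq!(got, vec![1, 2, 3]);
    assert_eq!(counted.count, 3);
}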

 -Kevin

 On May 30, 2014, at 9:31 AM, Andrew Poelstra apoels...@wpsoftware.net
 wrote:

  Hi guys,
 
 
  Take is an iterator adaptor which cuts off the contained iterator after
  some number of elements, returning None from then on.
 
  I find that I need to detect whether I'm getting None from a Take
  iterator because I've read all of the elements I expected or because the
  underlying iterator ran dry unexpectedly. (Specifically, I'm parsing
  some data from the network and want to detect an early EOM.)
 
 
  This seems like it might be only me, so I'm posing this to the list: if
  there were a function Take::is_done(&self) -> bool, which returned whether
  or not the Take had returned as many elements as it could, would that be
  generally useful?
 
  I'm happy to submit a PR but want to check that this is appropriate for
  the standard library.
 
 
 
  Thanks
 
  Andrew
 
 
 
  --
  Andrew Poelstra
  Mathematics Department, University of Texas at Austin
  Email: apoelstra at wpsoftware.net
  Web:   http://www.wpsoftware.net/andrew
 
  If they had taught a class on how to be the kind of citizen Dick Cheney
  worries about, I would have finished high school.   --Edward Snowden
 




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] rusti - the - repl renovation

2013-09-20 Thread raphael catolino
 Perhaps the best thing is just to fork(2), so we get a new (OS-level)
 process that has copy-on-write (virtual) memory, and if the compilation +
 run succeeds in the child, then have the child take over. Otherwise the
 child dies with a fail! + location message, and we return to the parent
 exactly as it was before the child was spawned.

Do you intend for rusti to work on Windows? I'm not sure you
could do something like that efficiently there.

On Fri, Sep 20, 2013 at 8:40 AM, Jason E. Aten j.e.a...@gmail.com wrote:
 On Thu, Sep 19, 2013 at 11:51 AM, Alex Crichton a...@crichton.co wrote:

 Basically, I'm OK with leaving out tasks/spawned tasks from rusti, but
 I think that it should be important to be able to fail! and have the
 repl state intact afterwards.


 Agreed. I'm convinced that fail! should result in an almost-magical "let's
 pretend that never happened" jump back in time.

 I'm still trying to figure out how to do this efficiently. For code that has
 a lot of state, serializing and deserializing everything will be too slow.

 Perhaps the best thing is just to fork(2), so we get a new (OS-level)
 process that has copy-on-write (virtual) memory, and if the compilation +
 run succeeds in the child, then have the child take over. Otherwise the
 child dies with a fail! + location message, and we return to the parent
 exactly as it was before the child was spawned.
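A minimal sketch of that scheme, written with the libc crate in today's Rust
(Unix-only; eval_line and continue_repl are placeholder names, not rusti's
actual API):

fn eval_with_rollback(eval_line: impl FnOnce() -> bool, continue_repl: impl FnOnce()) {
    unsafe {
        match libc::fork() {
            -1 => panic!("fork failed"),
            0 => {
                // Child: gets a copy-on-write snapshot of the whole REPL state.
                if eval_line() {
                    // Success: keep the updated state and carry on as the REPL.
                    continue_repl();
                    libc::_exit(0);
                }
                // Failure: die; the parent rolls back to its snapshot.
                libc::_exit(1);
            }
            child => {
                // Parent: wait to see how the evaluation went.
                let mut status = 0;
                libc::waitpid(child, &mut status, 0);
                if libc::WIFEXITED(status) && libc::WEXITSTATUS(status) == 0 {
                    // The child took over as the REPL; nothing left to do here.
                    libc::_exit(0);
                }
                // Otherwise fall through: the caller re-prompts with the state
                // it had before the fork.
            }
        }
    }
}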



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Question about threads (that arose while trying to write a FUSE interface)

2013-08-10 Thread raphael catolino
Hi Micah, I don't have much experience with Rust, but I stumbled onto a
similar problem a few weeks ago. What I found out was that calling
Rust functions from code that has no Rust-thread context will fail
more often than not, except if those functions are simple wrappers
around extern C functions.

If you run something like this: http://pastebin.mozilla.org/2824359,
the C function runs fine, but the `println` aborts with "fatal runtime
error: thread-local pointer is null. bogus!" (and some Lovecraft
excerpt??). If you're not in a Rust thread you can't access the TLS, so
I'd say you can expect most std::* functions that allocate or access
TLS memory to fail the same way.

Have you thought about patching libfuse to use Rust threads instead?
This would probably be a whole lot of work, but then I think you could
use Rust in your FUSE callbacks transparently. Another, probably
simpler/saner, way would be to use the single-threaded option and then
spawn a new Rust thread in each callback (see the sketch below). You
have to ensure that the callbacks are called from a Rust context,
though. I think that libfuse forks into the background when you call
fuse_main(), which might make you lose that context (if libfuse uses
some 'clone()' call instead of fork()/daemon()). In that case you
should pass the '-f' option to fuse_main() to prevent it, and you
should retain the Rust context.
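A rough sketch of that "hand the callback off to Rust-managed threading"
idea in today's Rust (std mpsc + OnceLock). The callback signature and the
request type are made-up stand-ins for real libfuse operations, and a real
filesystem callback would also have to wait for the worker's reply before
returning:

use std::ffi::{CStr, CString};
use std::os::raw::{c_char, c_int};
use std::sync::{mpsc, Mutex, OnceLock};
use std::thread;

enum Request {
    Getattr(String),
}

static REQUESTS: OnceLock<Mutex<mpsc::Sender<Request>>> = OnceLock::new();

/// Called once before entering the C library's main loop; starts the Rust
/// worker thread that does the real work.
fn start_worker() {
    let (tx, rx) = mpsc::channel();
    let _ = REQUESTS.set(Mutex::new(tx));
    thread::spawn(move || {
        for req in rx {
            match req {
                Request::Getattr(path) => println!("getattr({})", path),
            }
        }
    });
}

/// The callback handed to the C library: it stays "C-like" and just forwards.
extern "C" fn getattr_cb(path: *const c_char) -> c_int {
    let path = unsafe { CStr::from_ptr(path) }.to_string_lossy().into_owned();
    if let Some(tx) = REQUESTS.get() {
        let _ = tx.lock().unwrap().send(Request::Getattr(path));
    }
    0
}

fn main() {
    start_worker();
    // Stand-in for the C library invoking the callback from one of its threads.
    let path = CString::new("/hello").unwrap();
    getattr_cb(path.as_ptr());
    thread::sleep(std::time::Duration::from_millis(50));
}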

Good luck!

On Sat, Aug 10, 2013 at 8:30 AM, Micah Chalmer mi...@micahchalmer.net wrote:
 Hi Rust devs,

 What can I assume about the safety of having a C library spawn its own 
 threads, from which it calls function pointers to extern functions written 
 in rust?  My guess (and only a guess--I have no actual reasoning behind it) 
 would be something like this:

   * I cannot expect anything involving tasks or the task scheduler to work 
 properly in this situation.  That means no spawning tasks, no use of ports 
 and chans, and no use of @ boxes.
   * I can expect plain c-like rust code to work, subject to the same rules 
 about thread safety as the equivalent C code.  That could include borrowed 
 references and ~ boxes.  Rust's rules ensure basic memory integrity within 
 each thread (except for unsafe blocks/fns), but I would still have to be 
 aware of potential race conditions, just as if I were writing C in the same 
 situation.

 If my guesses are true, how much of the standard library can I use?  What 
 functions make assumptions about threads and the runtime behind the scenes in 
 non-obvious ways?  Is there a way to know?  Will there be?

 If my guesses are false, I would appreciate a correct view of the situation.

 I've already been able to get very simple c-like rust code to work in a 
 situation like this, but I haven't done enough testing to have any confidence 
 in it.  There could be hidden race conditions/crashes that would eventually 
 appear, even for the simple case--my tests may have just accidentally 
 worked.

 Why this came up for me:

 As a curiosity project, I decided I'd see if I could write an interface to
 the FUSE library, to the point where I'd be able to create a working
 userspace filesystem in Rust. At this point all it does is implement the
 minimum interface required to get a Rust version of the FUSE tutorial
 hellofs working.

 When writing a filesystem using the high-level FUSE API, the filesystem is 
 expected to call the fuse_main function and pass it a structure full of 
 pointers to its own functions.  FUSE then runs its main loop and calls the 
 passed-in functions when a filesystem operation is requested.  By default, 
 FUSE will spawn its own OS threads and may call the filesystem functions from 
 any of them.  It's possible to force the library to run single-threaded, but 
 at the cost of performance--it can no longer perform more than one file 
 system operation at a time.
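The general shape of that calling convention, reduced to a tiny runnable
illustration (Ops and fake_fuse_main are made-up names; in the real library
the loop and the thread live in C inside fuse_main):

use std::os::raw::c_int;
use std::thread;

#[repr(C)]
struct Ops {
    getattr: extern "C" fn(c_int) -> c_int,
}

extern "C" fn my_getattr(ino: c_int) -> c_int {
    // "Plain C-like Rust": no task machinery, just ordinary code.
    println!("getattr called for inode {}", ino);
    0
}

static OPS: Ops = Ops { getattr: my_getattr };

fn fake_fuse_main(ops: &'static Ops) {
    // Stand-in for FUSE's own OS thread calling back into the filesystem.
    let handle = thread::spawn(move || (ops.getattr)(1));
    handle.join().unwrap();
}

fn main() {
    fake_fuse_main(&OPS);
}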

 You can see the barely-started WIP at 
 https://github.com/MicahChalmer/rust-fuse if you're curious.  I plan to post 
 again to the list when it's in some sort of shape that would be worth looking 
 at for its own sake, but I'm writing now because of the question above.

 Thanks

 -Micah
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] Signal handling

2013-07-15 Thread raphael catolino
Hi everyone,
I've been writing an interface for the Linux Unix-domain socket API. Upon
program termination I must ensure that any socket file that was created is
unlinked. To deal with that, I added a call to unlink in the destructor of
the struct I use to hold the socket descriptor. This works well when the
program terminates normally; however, when it's terminated by a signal,
the destructor isn't called and the file never gets unlinked.
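A stripped-down sketch of that destructor approach (today's Rust with the
libc crate; the struct and its fields here are only illustrative):

use std::ffi::CString;

struct UnixSocket {
    fd: libc::c_int,
    path: CString,
}

impl Drop for UnixSocket {
    fn drop(&mut self) {
        // Runs on normal termination and on unwinding, but not when the
        // process is killed by a signal, hence the handler described below.
        unsafe {
            libc::close(self.fd);
            libc::unlink(self.path.as_ptr());
        }
    }
}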

So I added an interface for the sigaction API, which allows me to catch
stuff like SIGINT/SIGTERM, but when I call a function in libstd from inside
the handler, it ends up calling abort(). I can call local functions as long
as they don't call more complex functions, and I can call native libc
functions as well (unlink works just fine, actually).
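For reference, the kind of handler that does work, sticking to
async-signal-safe libc calls (a sketch using the libc crate; the hard-coded
socket path is purely illustrative):

use std::{mem, ptr, thread, time};

extern "C" fn on_signal(_sig: libc::c_int) {
    // Only async-signal-safe libc calls in here; no libstd, no allocation.
    unsafe {
        libc::unlink(b"/tmp/example.sock\0".as_ptr() as *const libc::c_char);
        libc::_exit(1);
    }
}

fn install_handlers() {
    unsafe {
        let mut action: libc::sigaction = mem::zeroed();
        action.sa_sigaction = on_signal as libc::sighandler_t;
        libc::sigemptyset(&mut action.sa_mask);
        libc::sigaction(libc::SIGINT, &action, ptr::null_mut());
        libc::sigaction(libc::SIGTERM, &action, ptr::null_mut());
    }
}

fn main() {
    install_handlers();
    // Park until a signal arrives; Ctrl-C should unlink the file and exit.
    loop {
        thread::sleep(time::Duration::from_secs(1));
    }
}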

Is that to be expected because of Rust's implementation? As I understand
it, the representation of stack segments in memory differs from the C one,
but I'm not sure whether that means Rust code can't work on a signal
handler's stack. Or is it maybe just a problem with the context from inside
the signal handler?

By the way, here are some backtraces I get when trying to call println()
or send() from the signal handler:
http://pastebin.mozilla.org/2635629
http://pastebin.mozilla.org/2635630

Raphael
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev