On 6/25/10 11:59 PM, Randy Bush wrote:
i am down 20g since reboot. /private/var/vm content has been constant.
rmac.psg.com:/Users/randy du -s .tahoe
52 .tahoe
Weird! Tahoe only touches the disk underneath its base directory (which
is ~/.tahoe by default), so if du doesn't show
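(A quick way to see where the 20G actually went is to walk the tree one
level at a time; this assumes the BSD-style du that ships with OS X:

    sudo du -x -d1 -k / | sort -n      # biggest subtrees sort last
    sudo du -x -d1 -k /Users | sort -n

The -x flag keeps du on a single filesystem, so other mounted volumes
don't pollute the totals.)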
On 7/24/10 2:12 PM, Terrell Russell wrote:
Also, consider flot for standalone javascript graph building.
http://code.google.com/p/flot/
Hey, this is great! I think this will do exactly what I need.
thanks!
-Brian
On 7/25/10 8:10 PM, Kyle Markley wrote:
Brian,
Yeah, I think we're approximately saturating the network during the large
file transfers. But for the small files, both network and CPU load are
very low (under 10%).
Yup. I suspect that your large files are running into python's
performance
On 7/26/10 12:32 PM, Zooko O'Whielacronx wrote:
Anyway, so I was wrong that repair works on immutable files and
directories given a verify cap to the thing—instead you require a
read-cap to the thing. Repair also works on mutable files and
directories given a write-cap to the thing (which means
On 7/26/10 1:49 PM, Raoul Duke wrote:
i consider myself not completely useless at
drawing/graphing/informatics etc. even though i'm not a graphic
designer by profession. anybody in the sf bay area who'd think this
sort of project would be fun to work on, on and off?
Sure! I'm in SF.. I'm
The unguessable caps make the attack payload trickier than the usual
trivial-pwnage payload, but not impossible.
Yeah, it means that the attacker cannot acquire authority (the ability
to read or write a tahoe file) by merely guessing at a URL: they have to
steal one from a tab which already
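For concreteness, an immutable read-cap looks like this (the base32
strings below are made up; real ones are derived from the AES key and
the share hashes):

    URI:CHK:ime3aty2tkanwgagm3oz7ddbwq:ceepsmwdvhyxzts3pm4pvyj2qexmtbmy3gjjcooq2mrejgqwiouq:3:10:131073

The two long fields are the encryption key and the integrity hash; the
trailing numbers are k, N, and the file size. Guessing a valid cap means
guessing those random-looking fields.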
I just finished pushing the new-downloader (#798) code into trunk. This
represents about 6 months of persistent effort: it's a great relief to
finally get it published.
I'll start working on the bugs that Francois found (#1154, #1155)
tomorrow, hopefully I can get them fixed in a day or two.
On 7/29/10 11:04 PM, eurekafag wrote:
I've noticed that NAT'ed nodes require a lot of time to connect to a
new node with external IP. Something between 20 mins and an hour.
Where can I set introducer request period to make it faster? Node with
external IP can't connect to one behind the NAT so
I wasn’t sure, so I thought I’d bring it up – does Tahoe support
IPv6 yet? If not, where is it on the roadmap?
Not yet. There's a feature request filed in trac #867 and it has
been previously discussed on this mailing-list, see this post¹.
I updated #867 with a pointer to a new foolscap#155
On 8/13/10 12:03 PM, Wayne Scott wrote:
Does it matter what version the machines in the cluster are running?
-Wayne
Nope. None of the server-side code changed from 1.7 to 1.8.
The expected speedups of the new-downloader code in 1.8 are on small
files (100KB): specifically 1.8 should have much
On 9/7/10 6:29 PM, Greg Troxel wrote:
I built 1.8.0c3 on netbsd-5/i386 (via an updated pkgsrc entry not yet
committed) and it seems to work. 'tahoe check --raw' feels faster than
the previous beta and 1.7.1.
tahoe manifest seems to work reasonably quickly.
Hrm, if your directories are
For about 18 months, I've been running a personal bidirectional
git-darcs bridge, and doing all of my own Tahoe development in git.
Last night I pushed a copy of this git repository up to GitHub:
http://github.com/warner/tahoe-lafs
The master branch in that repo will exactly follow our
A little while ago, someone at Mozilla did a short presentation on
Firefox Papercuts, based upon looking at hundreds of vague complaints
thrown out to the surprisingly-caring-but-still-not-a-support-channel
winds of #firefox on Twitter (in particular, things that were annoying
enough to complain
On 10/18/10 9:25 AM, Ravi Pinjala wrote:
Yes, exactly. The possession of the file cap *is* the authorization.
(As far as whether there are any plans to implement a more traditional
authentication method, like username/password: I don't think so, but
I'll defer to the much-more-experienced
On 2010-10-18 3:32 PM, Brian Warner wrote:
- darcs is slow, hard to publish branches, hard to use local
branches,
Zooko just pointed me at the following tidbit:
http://darcs.net/manual/Best_practices.html#SECTION0053
to turn on the global patch cache. The version
On 10/19/10 7:15 PM, Siddhartha Kasivajhula wrote:
Reading http://pypi.python.org/simple/foolscap/
Reading http://foolscap.lothar.com/trac
...and then it just gets stuck here. It's been at this step for about 30
minutes now with no developments. I went to foolscap.lothar.com
On 10/20/10 5:53 AM, slush wrote:
Hi,
I found that Tahoe-LAFS implements support for the Range header by
downloading the whole file from the grid and then returning a subset of
the original file. Is that because downloading just a small piece of a
file is impossible in Tahoe by design, or just because
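(For anyone who wants to poke at this, a ranged read through the web-API
gateway can be issued like so, assuming the default gateway address of
127.0.0.1:3456 and with FILECAP standing in for a real read-cap:

    curl -v -H "Range: bytes=0-1023" http://127.0.0.1:3456/uri/FILECAP

A 206 Partial Content response means the range was honored.)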
On 10/21/10 5:13 PM, Francois Deppierraz wrote:
Then, I deleted about 1000 very small files from a single directory
and had a look at how many objects of each type were created.
Wow, this is awesome.
3: a3 [-] 1 allmydata.util.dictutil.DictOfSets: 0x675a0c0
4: a4 -- [-] 1 dict
rc = hp.heap()[0].bysize[0].byid[0].rp[5].theone
verinfos = set([verinfo for (verinfo,shnum) in rc.cache.keys()])
len(verinfos)
1
So there's only one 'verinfo' value there. The size of the cache repr()
is indeed pretty big.
Huh, that knocks out my theory that ResponseCache is living
On 11/10/10 8:05 AM, Kyle Markley wrote:
On Wed, 10 Nov 2010 07:53:46 -0500, Greg Troxel g...@ir.bbn.com wrote:
I think you have a good point about symlinks. Probably you should file
a ticket so this doesn't get lost. It isn't immediately obvious how
they should work, especially because
On 11/14/10 5:48 PM, Nathan Eisenberg wrote:
No - no RAID - same disk approach as if you were running 24 nodes -
except the node is configured to use /mnt/disk1, /mnt/disk2,
/mnt/disk3, etc, instead of having a node process for /mnt/disk1,
/mnt/disk2, /mnt/disk3, etc.
In this way, a 24
On 11/15/10 3:02 AM, Francois Deppierraz wrote:
1-of-3 58
2-of-3 63
3-of-3 58
10-of-30 210
20-of-60 394
Hm. Just before 1.8.0, I was using the JS/Protovis -based download
timeline visualization tools (which didn't get landed) to investigate
the overhead of large k (i.e.
On 11/24/10 3:32 PM, Greg Troxel wrote:
Bostonian ygwe...@gmail.com writes:
I am experiencing an issue with unplugging one node from my private
grid. Here is my testbed:
1) 5 storage nodes
2) N=5, K=3
3) two clients running on my laptops
It has been working fine for normal operations.
On 11/28/10 1:15 AM, Ted Rolle Jr. wrote:
Alice writes a book; Bob typesets it for her. Can they collaborate on
this book, or is the book only available to the uploader?
Sure they can collaborate. If she uploads the book as a mutable file, or
if she uploads it as an immutable file but inside
On 11/30/10 10:58 PM, Kyle Markley wrote:
Brian et al,
Huh? Shouldn't the new upload just put new shares in place? I know
our uploader isn't particularly clever in the face of existing shares
(it will put multiple shares on one server, and in general not
achieve the ideal diversity), but it
On 12/4/10 12:42 AM, Kyle Markley wrote:
Thanks for looking at this, everyone. I'm happy that the problem is
already known, but sad there isn't a fix yet. I'm going to work around
this by recreating my grid and this time keeping expiration turned
off. I expect that will prevent these
On 12/5/10 2:47 AM, Shu Lin wrote:
Putting all nodes behind a NAT with a strict firewall and no manual
config just isn't going to work.
I don't think Tahoe has any restriction on the deployment environment. Having a
server accessed through a public IP doesn't mean I can't put the server
On 12/5/10 3:52 PM, James A. Donald wrote:
A centralized coordinator is single point of failure and an additional
configuration issue. If everyone runs the same algorithm, they will
mostly agree without need for a central coordinator - though there
will never be 100% agreement. If the system
On 12/5/10 7:02 PM, Greg Troxel wrote:
There is bulk data, and thus we need TCP-friendly congestion control.
So moving away from TCP requires reinventing it.
Excellent point. uTP has a lot of congestion-management code in it, but
is designed to always yield to TCP, so it'd be good for
It's December, which means it's time to talk about Accounting again[1].
Each time we cycle around this topic, we chip away at the complexity,
prune back some of the loftier goals, haggle for a couple of weeks, then
throw up our hands and go back to our day jobs for another couple
months.
This
On 12/21/10 5:56 AM, Greg Troxel wrote:
and avoid reinventing the trust management wheel
Your comment made me realize more crisply that the real property I want
from pgp is to be able to manage keys via pgp and then easily insert
them into tahoe. I really do mean manage via and insert,
On 1/4/11 9:36 AM, Greg Troxel wrote:
If the server status page had information about which servers had
refused to take shares (and the smallest size that was refused)
recently, that would help people figure things out. Right now the only
way to tell that servers are full is to get upload
On 1/5/11 6:26 AM, slush wrote:
Hello,
after a few weeks I checked the logs of my storage repairs and found
that the process is constantly throwing UnhappinessError. That means I
probably lost some of my data, right? Is there some way to fix it, or to
skip the error and let the repairer renew the other files
Author: david-sarah david-sa...@jacaranda.org
Date: Thu Dec 30 22:00:39 2010 -0800
Update foolscap version requirement to 0.6.0, to address
http://foolscap.lothar.com/trac/ticket/167
- foolscap[secure_connections] >= 0.5.1,
+ # foolscap 0.6 is
On 1/5/11 2:26 PM, Carsten Krüger wrote:
What matters most is the filecap, dircap, or rootcap under which
you stored your data. You must retain access to that string.
This is only a small amount of data that never changes?
Right. Think of it like a URL that points to a whole site full of
On 1/5/11 5:15 PM, David-Sarah Hopwood wrote:
Unfortunately, there is no way that I know of to declare a conditional
dependency on 'foolscap[secure_connections] >= 0.6.0' if the version
of Twisted we are using is >= 10.2.
Yup. I'm just venting :).
'setup.py build' should download a foolscap
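The closest thing to a workaround is computing the requirement at setup
time; a hypothetical sketch, not what Tahoe's setup.py actually does:

    # setup.py fragment -- hypothetical, for illustration only
    install_requires = []
    try:
        import twisted
        new_twisted = (twisted.version.major, twisted.version.minor) >= (10, 2)
    except ImportError:
        new_twisted = True  # assume a modern Twisted will get installed
    if new_twisted:
        install_requires.append("foolscap[secure_connections] >= 0.6.0")
    else:
        install_requires.append("foolscap[secure_connections] >= 0.5.1")

The obvious flaw is that this only sees whatever Twisted happens to be
importable while setup.py runs, not the one setuptools installs afterwards.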
On 1/11/11 7:19 AM, Shawn Willden wrote:
Specifically, I'm wondering what happens if a client running a
deep-check --repair --add-lease tries to add a lease on an existing
share and the storage server refuses the lease renewal?
That part of the code needs some work, on both sides. At
On 1/16/11 8:39 PM, Shawn Willden wrote:
Removing the image folder doesn't help, either, because even without the
directory node, Bob could have saved the caps of the files themselves.
The only way for Alice to make them inaccessible to Bob is to wait
until expiration removes the shares of
The weekend Tahoe Review Party is rolling along, cranking out work on
the upcoming 1.8.2 release. The target code freeze is tomorrow night
(monday, say around 11pm PST). We're down to 12 tickets left to resolve,
all of which are getting really close:
packaging:
#585 bbfreeze
#668
On 1/17/11 11:37 AM, medhioub wrote:
Hello,
Any news about the handling of small files in Tahoe?
I gave it a try a few months ago (and I like it very much), but
unfortunately ran into poor performance and a few problems with small files.
We replaced the immutable-file downloader in version 1.8.0
On 1/16/11 4:53 AM, Greg Troxel wrote:
Command line tools for tahoe are less functional than WUI, so it's
too tempting to use the WUI, which means firefox/etc. handles caps,
which is obviously unsafe. Getting to the point where I don't want
to use the WUI beyond seeing server status
On 1/19/11 4:53 PM, Jody Harris wrote:
I think so.
open .tahoe/tahoe.cfg and look at the tub.location line.
Take out the IP:Ports you don't want to advertise.
In particular, make tub.location look like host:port,host:port for
whatever you want to advertise. If tub.location is missing, Tahoe
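For reference, a complete pair of settings in the [node] section of
tahoe.cfg might look like this (hostname and port are placeholders):

    [node]
    tub.port = 34000
    tub.location = mybox.example.com:34000

Multiple comma-separated host:port pairs can be listed in tub.location,
and every one of them gets advertised to the rest of the grid.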
According to the 1.8.2 Milestone[1], we've got just two bugs left open.
#668 is probably code-complete, but it sounds like it's waiting for a
manual test before we can consider it closed. #585 is waiting on one more
patch to deal with .tac files. As soon as we close those two bugs,
hopefully within
Ok, the last expected code changes have been committed, so I'm
officially closing the tree until we get 1.8.2 out the door. All
checkins must be approved by the release manager (me).
I'm currently aware of three remaining issues that block or appear to
block the release:
#668: work is
On Mon, Jan 24, 2011 at 9:27 AM, Jimmy Tang jcft...@gmail.com wrote:
I just pulled the latest update from the git mirror of tahoe
(git://github.com/warner/tahoe-lafs.git)
Some of the tests fail
[FAIL]: allmydata.test.test_client.Basic.test_versions
[FAIL]:
On 1/25/11 12:40 AM, Michael Coppola wrote:
Hey devs,
I seem to be having lots of trouble uploading large (as in, 8gb) files
to my Tahoe-LAFS network through the FTP interface. I set Filezilla to
its highest timeout, seconds, and it still hasn't finished
processing the file by the end
Also, how does the standard solution deal with GETs?
You can put the secret parameter in the URL query string, thus
defeating the porpoise.
More to the point, GETs are supposed to be idempotent and safe. Updating
your server's configuration does not fall into that category. Use only POSTs
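To make that concrete, with a hypothetical /config endpoint: a request like

    GET /config?action=reset&secret=abc123

leaves the secret in server logs, browser history, and Referer headers,
and invites any link-prefetcher to "reset" the config for you. The same
action as

    POST /config

with the secret carried in the request body avoids both problems.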
I've just pushed the 1.8.2b1 tag for the first beta snapshot, including
an updated NEWS file but not yet updating the release notes. We're
having some problems with the automatic tarball builder (#1335), but
I've just manually copied a .tar.bz2 and .zip up to the web site:
On 1/26/11 1:22 AM, Jimmy Tang wrote:
I just did my usual build and test; I usually prefer to run tahoe from
where I built it. Anyway, here are the notes...
archlinux:green
OS-X 10.6.6: green
winxp sp3 py27 mingw: green
Excellent! Thanks for the report!
-Brian
make Tahoe-LAFS
possible.
Brian Warner
on behalf of the Tahoe-LAFS team
January 30, 2011
San Francisco, California, USA
[1] http://tahoe-lafs.org/trac/tahoe/browser/relnotes.txt?rev=4865
[2] http://tahoe-lafs.org/trac/tahoe/browser/NEWS?rev=5000
[3] http://tahoe-lafs.org/trac/tahoe/wiki
On 2/2/11 8:44 AM, Greg Troxel wrote:
Most of the nodes on VolunteerGrid2 are set up behind routers with
port-forwarding to the nodes. With the exception of one 6+ year old
router, it is working and we haven't experienced any problems. (The
router in question is scheduled for
On 2/3/11 3:47 PM, Greg Troxel wrote:
Thanks. I set it to
rfc1918-address:port,dyndns-name:port
and things seemed to work. I am boggled by 127.0.0.1, but I'm trying to
get other clients behind the same NAT to work, not other clients on the
same box.
Oh, yeah, sorry about that, I
On 2/4/11 4:51 AM, Shawn Willden wrote:
Without getting into the advantages/disadvantages of that approach,
the reason I mention it is because what I've observed is that when a
disk gets an I/O error on any one of the partitions, the OS assumes
that the whole disk is having trouble and drops
On 2/7/11 7:18 PM, David-Sarah Hopwood wrote:
On 2011-02-08 02:42, Scott Dial wrote:
However, your idea about a safe web gateway is something that I had a
desire for as well, for my own personal grid. In that case, I am only
consuming my own resources by making caps known to the internet
On 2/9/11 12:20 AM, sreenivasulu velpula wrote:
When an upload of an immutable file happens on a Tahoe grid, it gives
time statistics; I didn't understand some of those timings.
+ Storage Index: 1.2ms (192.3kBps)
(I assume this is the time for hashing the encryption
key and generating
On 2/9/11 2:38 PM, David-Sarah Hopwood wrote:
On 2011-02-09 19:29, Brian Warner wrote:
On 2/9/11 12:20 AM, sreenivasulu velpula wrote:
+ Encode And Push: 37 seconds (7.6kBps)
# Cumulative Encoding: 7.4ms (31.6kBps)
# Cumulative Pushing: 24ms (10.0kBps)
It seems strange that Cumulative
We currently use serverids for three things:
* A: [share-placement: claim of independent failure modes]
* B: [permutation seed]
* C: [shared secret seed]
In the future, we would like to also use serverids to:
* D: [server-selection UI handle]
* E: [Accounting handle]
As Zooko
On 2/18/11 8:02 AM, Samuel Neves wrote:
One positive point about hash-based signatures is that there's plenty
of parallelism to go around. And parallelism is what processors are
getting good at lately.
The performance of SHA-256** for signatures can be improved.
Incidentally, one
I've been thinking and talking a while about moving to Storage Clubs
in tahoe as a grid-membership management system. The idea is to use an
invitation scheme: when you start your Tahoe node, you can either start
a new grid or accept an invitation to join an existing one. Grids would
be kept small
On 2/22/11 7:32 PM, Ravi Pinjala wrote:
There is a lot about this that I like. :D A few questions:
Excellent questions!
- When you talk about the tahoe-lafs.org dyndns service, you really
mean a federated system where anybody can host a grid on their own
domain, right? :) I feel like
On 2/24/11 5:37 AM, Greg Troxel wrote:
I'm not entirely clear on the 'tahoe debug catalog-shares' output, but
it seems that field 6 is the remaining lease duration. On a pubgrid
server, I find that about half the shares have 0 in this field, and
the files are quite old.
Yup, that field is
On 2/24/11 1:29 AM, sreenivasulu velpula wrote:
I have a few questions about mutable file timings.
+ Setup: 193us
(What is the setup time?)
That's the time spent between the decision to modify a file and the
start of encryption, which includes
On 2/28/11 11:06 AM, David-Sarah Hopwood wrote:
Rebalancing, i.e. changing the parameters of existing files and
putting shares in the optimal places, is less automatic than we would
like it to be.
One nitpick: so far, we've been using the word rebalance to mean move
existing shares around to
On 9/24/10 12:36 AM, Zooko O'Whielacronx wrote:
The largest Tahoe-LAFS grid that has ever existed as far as I know was
the allmydata.com grid. It had about 200 storage servers, where each
one was a userspace process which had exclusive access to one spinning
disk. Each disk was 1.0 TB except
On 10/5/10 4:47 PM, Greg Troxel wrote:
I have two theories:
A) I ran 'find . -size +8192 | xargs rm' in the storage area on some
nodes to reclaim space so I could repair 1 KB files. I don't think I
did this on linuxpal, as it still has lots free, and I think the ones
I did it on
On 10/8/10 5:09 PM, Brian Warner wrote:
On 9/24/10 12:36 AM, Zooko O'Whielacronx wrote:
Whoa. I guess I wrote that message back in October and never sent it..
must have hit the wrong button this morning and it went out. Sorry for
the anachronistic confusion!
-Brian
Having read Zooko's original message more carefully, I think I was
responding to the wrong concern (which is probably why I deferred
sending that response for long enough to forget about it). Here's a
better response.
On 9/24/10 12:36 AM, Zooko O'Whielacronx wrote:
However, I'm also sure that
On 5/5/11 2:21 PM, Kenny Taylor wrote:
So global file-level deduplication = bad. Not necessarily true for
block-level dedup. Let's say we break a file into 8kb chunks, encrypt
each chunk to the user's private key, then push those chunks to the
network. The same file uploaded by different
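To make the block-level scheme concrete, here is a minimal chunking
sketch (pure illustration, not Tahoe code: fixed 8 KiB chunks, per-chunk
keys derived by hashing, the actual cipher elided):

    import hashlib

    CHUNK_SIZE = 8 * 1024

    def chunk_ids(data, convergence_secret=b""):
        # Yield (chunk_key, storage_id) for each 8 KiB chunk. If
        # convergence_secret is empty or shared, identical chunks get
        # identical storage ids and so deduplicate across uploaders;
        # a per-user secret restricts dedup to that user's own files.
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            key = hashlib.sha256(convergence_secret + chunk).digest()
            storage_id = hashlib.sha256(key).hexdigest()
            yield key, storage_id

The choice of convergence_secret is exactly the trade-off under
discussion: content-derived keys buy dedup, but they also let anyone who
can observe storage ids confirm whether you hold a known file.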
On 4/6/11 3:29 PM, Greg Troxel wrote:
So, I find myself doing
tahoe deep-check -d ~/.tahoe-gdt --add-lease --repair
tahoe deep-check -d ~/.tahoe-pubgrid --add-lease --repair
and naturally would like to have aliases
tahoep () { tahoe -d ${HOME}/.tahoe-pubgrid "$@"; }
tahoeg () { tahoe -d ${HOME}/.tahoe-gdt "$@"; }
On 5/19/11 2:56 AM, berta...@ptitcanardnoir.org wrote:
For the sake of the Debian package, I've written a manpage for bin/tahoe.
You can find it in the branch feature/manpage of the Debian package git
repo. [1]
I'd be honored if you'd agree to merge it into the upstream repo, and
On 6/3/11 11:45 AM, Zooko O'Whielacronx wrote:
On Fri, Jun 3, 2011 at 4:28 AM, Greg Troxel g...@ir.bbn.com wrote:
But, in wiki:Capabilities, it says that a directory is just a mutable
file with special interpretation.
This is perhaps a misleading statement. The special interpretation
I was thinking a bit about Accounting this morning, as I've recently
been reviewing my notes and code branches from the last few years in
preparation for another development push.
Most of our designs concentrate storage authority (the right to
consume space on somebody else's server) in a
We're getting closer to a 1.9 release.. I agreed to be Release Manager
months ago, but then got caught up in other responsibilities and have
been slacking in my RM duties. But here's the plan we talked about on
IRC/phone today:
* new features: to make the 1.9 train, new features must have a
On 7/18/11 1:35 PM, Zooko O'Whielacronx wrote:
By supporting here, I mean going out of our way to provide binary
eggs of dependencies, and actively soliciting people to run
buildslaves.
Let's do those things only for Python 2.7, and not for other versions
of Python, on Windows. Also let's
On 7/29/11 5:39 PM, Kevin Reid wrote:
Given the two origins, the only way you are in danger is if you have
two “raw” (from origin #2) documents open in your browser at once
The tab's history is also an angle of attack, so I think another danger
is to open two different documents in sequence in
The upcoming 1.9 release is like your favorite reality TV show: 50
tickets enter the arena, but which ones will survive to reach beta1?
Root for your favorites by getting patches ready for review. Vote your
least favorites off the release train and condemn them to 1.10.
Excitement!
Last night
On 8/4/11 3:14 PM, Olaf TNSB wrote:
Therefore, people who redistribute tahoe-lafs in some container which
isn't a darcs repository should include a src/allmydata/_version.py
with it. If they don't, you'll get version == 'unknown'. Those
tests that you saw fail are there to make sure that
On 8/1/11 10:16 PM, Joseph Ming wrote:
The opposite question is also relevant I think: what extra precautions
does tahoe take to protect the user from losing their root cap? If I
understand the design correctly, without the root cap (or access to
some stored cap somewhere) the user won't be
Hi everyone.. just wanted to give an update on the progress of 1.9.
MDMF has been taking longer than we expected to review and land..
there's a lot of code! But we're planning to land it tomorrow, because:
* we've committed to having MDMF in 1.9 (i.e. we'll delay 1.9 as
necessary to get
This morning I finally tagged 1.9.0a1, available as a tarball in here:
http://tahoe-lafs.org/source/tahoe-lafs/tarballs/
(look for allmydata-tahoe-1.9.0a1.tar.bz2, or .zip, or -SUMO.zip if
you want the dependencies too)
or through version control with:
darcs get --lazy
Just a quick update:
* MDMF cleanup continues. I found and fixed a few significant problems
over the weekend (#1510, and an unsafe dependency upon the ordering
of dict.values()). I also added 'tahoe debug' support for MDMF
(dump-cap, dump-share, catalog-shares), and fixed some
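For the curious, the dict-ordering bug class looks like this in
miniature (an illustration, not the actual MDMF code):

    shares = {}          # maps shnum -> share
    shares[5] = "E"
    shares[0] = "A"
    # WRONG: Python dicts promise no particular iteration order,
    # so this can pick a different share from one run to the next:
    first = list(shares.values())[0]
    # RIGHT: impose an explicit order:
    first = shares[min(shares)]

CPython happens to be deterministic for a fixed build and key set, which
is exactly why bugs like this hide until something perturbs the hash order.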
We're deep in the guts of the new MDMF code, cleaning things up and
fixing bugs that we're not comfortable leaving in place for the 1.9
release. Some of the bugs have provoked us to restructure the code a
bit, so it's taking some time.
My current expectation is that we'll release an alpha2 next
I've spent the last week running some performance tests on trunk (which
will behave very much like the upcoming 1.9 release), using four
LAN-connected hosts generously donated by Atlas Networks.
The report is up on the wiki here:
http://tahoe-lafs.org/trac/tahoe-lafs/wiki/Performance/Sep2011
On 9/23/11 6:33 PM, Zooko O'Whielacronx wrote:
Could you provide us with the New Visualizer display of a small-K
immutable download and of a large-K immutable download so that we can
compare the two?
Hm, that's an interesting question. I could provide a screen capture of
such a display, and if
I've finally finished tagging 1.9.0alpha2, the second milestone towards
the upcoming 1.9 release. We've made a number of improvements to MDMF
since alpha1, so grab yourself a copy and test it out:
http://tahoe-lafs.org/source/tahoe-lafs/tarballs/allmydata-tahoe-1.9.0a2.tar.gz
(also .zip, or
While going over my Accounting work[1] this morning, I had an idea about
simplifying the backend storage share-file format. I'd like to remove
the lease information from the share files themselves, and use a
separate per-server sqlite database (the LeaseDB) to hold all lease
data. I wanted to
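A hypothetical minimal schema (table and column names invented for
illustration) would be something like:

    import sqlite3

    db = sqlite3.connect("leasedb.sqlite")
    db.execute("""
        CREATE TABLE IF NOT EXISTS leases (
            storage_index   TEXT    NOT NULL,
            shnum           INTEGER NOT NULL,
            owner_id        INTEGER NOT NULL,
            renewal_time    INTEGER NOT NULL,   -- unix seconds
            expiration_time INTEGER NOT NULL,
            PRIMARY KEY (storage_index, shnum, owner_id)
        )
    """)
    db.commit()

With leases out of the share files, renewing a lease no longer rewrites
the share, and finding expired leases becomes one SQL query instead of a
crawl over every file on disk.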
Ok, we finally landed the web-frontend changes to make it easier to use
MDMF files (#1552, the upload a file form now has a three-option
radiobox for CHK/SDMF/MDMF instead of one toggle for mutable and a
second two-option radiobox for SDMF/MDMF).
I've added one more blocker for b1 (#1561), which
We fixed the last major MDMF blocker tonight (#1561), so we're finally
clear to tag beta1. This means the web-API and CLI arguments are stable,
and we're comfortable with supporting them for a while.
The changes since alpha2 are pretty minor:
* CLI: use tahoe put --format=MDMF to create an
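A quick usage sketch, assuming --format accepts the same CHK/SDMF/MDMF
spellings as the new web form:

    tahoe put --format=MDMF notes.txt tahoe:notes.txt   # large mutable file
    tahoe put --format=SDMF small.txt                   # old-style mutable
    tahoe put big.iso                                   # no flag: immutable CHK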
On 10/14/11 8:04 AM, Greg Troxel wrote:
Thanks for 1) making the unpacked dir match the version in the file and
Glad that helps.. it's part of the tarball-generation automation, so it
should always work that way.
I am assuming while a new foolscap release is pending, 1.9.0 will not
require
On 10/14/11 10:57 AM, David-Sarah Hopwood wrote:
There is custom magic in setup.py that creates bin/tahoe using a
shebang for the interpreter that ran setup.py;
Right, bin/tahoe is the only entry point, so it's the only script that
needs this treatment.
but the Makefile just does this:
On 10/25/11 9:20 AM, Dirk Loss wrote:
To foster my understanding, I've tried to visualize what that means:
http://dirk-loss.de/tahoe-lafs_nhk-defaults.png
Wow, that's an awesome picture! If we ever get to produce an animated
cartoon of precious shares and valiant servers and menacing
Some summit notes:
* I've got conference room space reserved at the Mozilla SF office (2
Harrison, at Spear St.) for 8-5pm each day, tuesday 8-Nov through
friday 11-Nov. We have to jump around a bit because of some
pre-existing meetings, but most of the rooms can accommodate 6-8 people
and
I'd love to release 1.9 this weekend. I've given up on most of the
lingering docs tickets. I have some more manual testing to do (in
particular updating the tahoe-deps tarball and testing against it),
but we're pretty close.
One problem that surfaced, which I need someone to investigate:
Ok, following Zooko's suggestion (and our practice from the previous
PyCrypto-2.3), I'm going with the 'host binary eggs of pycrypto-2.4 so
tahoe installations won't try to compile it themselves' route.
I've built binary .eggs of PyCrypto-2.4 for the two platforms available
to me (py2.6-linux-i686
Thank you very much to the
team of hackers in the public interest who make Tahoe-LAFS
possible.
Brian Warner
on behalf of the Tahoe-LAFS team
October 31, 2011
San Francisco, California, USA
[1] https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/NEWS.rst?rev=5356
[2] https://tahoe-lafs.org/trac
Now that 1.9 is out the door, I'm finally landing a summer's worth of
backlogged code. The first to arrive is a newly-rewritten Download
Status Timeline Visualizer (or viz for short). 1.9.0 included an early
version of this: today's patch replaces the internals to use d3.js
instead of
Ok, thanks to Samuel and Vladimir for their PyCrypto-2.4 eggs. Now we
only need eggs for the following platforms to achieve parity with the
previous 2.3 version:
py2.6-freebsd-8.1-RELEASE-i386
py2.6-linux-x86_64
py2.6-netbsd-5.0.2-i386
py2.6-win32
py2.7-win32
The set of platforms for
As the last step of the migration to new tahoe-lafs.org servers, we
finally shut down the old boxes. Unfortunately, I forgot that most of
our buildslaves are connecting through the old address (via a TCP
relay), and now they've gone offline.
If you run a Tahoe buildslave, could you please edit
On 11/4/11 6:26 PM, Kyle Markley wrote:
The URL actually points to a path that doesn't exist. The zfec source
isn't on the web server.
When I try to build tahoe-lafs instead of zfec, I get a similar error,
but for this path:
https://tahoe-lafs.org/source/tahoe-lafs/trunk/_darcs/inventory