Author: nextgens
Date: 2006-01-18 20:14:19 +0000 (Wed, 18 Jan 2006)
New Revision: 7878

Added:
   trunk/apps/yafi/COPYING
   trunk/apps/yafi/Changelog
   trunk/apps/yafi/Makefile
   trunk/apps/yafi/README
   trunk/apps/yafi/fcp.py
   trunk/apps/yafi/fec.py
   trunk/apps/yafi/fec/
   trunk/apps/yafi/fec/Makefile
   trunk/apps/yafi/fec/fec.c
   trunk/apps/yafi/fec/fec.h
   trunk/apps/yafi/fec/onionfecmodule.c
   trunk/apps/yafi/freenet.py
   trunk/apps/yafi/metadata.py
   trunk/apps/yafi/mimedata.py
   trunk/apps/yafi/onionfec_a_1_2.py
   trunk/apps/yafi/yafi
Log:
Here is yafi-20040801.tar.bz2, the latest version I'm aware of.



Added: trunk/apps/yafi/COPYING
===================================================================
--- trunk/apps/yafi/COPYING     2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/COPYING     2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,340 @@
+                   GNU GENERAL PUBLIC LICENSE
+                      Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+                       59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                           Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users.  This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it.  (Some other Free Software Foundation software is covered by
+the GNU Library General Public License instead.)  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+  To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have.  You must make sure that they, too, receive or can get the
+source code.  And you must show them these terms so they know their
+rights.
+
+  We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+  Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software.  If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+  Finally, any free program is threatened constantly by software
+patents.  We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary.  To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                   GNU GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License.  The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language.  (Hereinafter, translation is included without limitation in
+the term "modification".)  Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+  1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+  2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) You must cause the modified files to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    b) You must cause any work that you distribute or publish, that in
+    whole or in part contains or is derived from the Program or any
+    part thereof, to be licensed as a whole at no charge to all third
+    parties under the terms of this License.
+
+    c) If the modified program normally reads commands interactively
+    when run, you must cause it, when started running for such
+    interactive use in the most ordinary way, to print or display an
+    announcement including an appropriate copyright notice and a
+    notice that there is no warranty (or else, saying that you provide
+    a warranty) and that users may redistribute the program under
+    these conditions, and telling the user how to view a copy of this
+    License.  (Exception: if the Program itself is interactive but
+    does not normally print such an announcement, your work based on
+    the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+    a) Accompany it with the complete corresponding machine-readable
+    source code, which must be distributed under the terms of Sections
+    1 and 2 above on a medium customarily used for software interchange; or,
+
+    b) Accompany it with a written offer, valid for at least three
+    years, to give any third party, for a charge no more than your
+    cost of physically performing source distribution, a complete
+    machine-readable copy of the corresponding source code, to be
+    distributed under the terms of Sections 1 and 2 above on a medium
+    customarily used for software interchange; or,
+
+    c) Accompany it with the information you received as to the offer
+    to distribute corresponding source code.  (This alternative is
+    allowed only for noncommercial distribution and only if you
+    received the program in object code or executable form with such
+    an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it.  For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable.  However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License.  Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+  5. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Program or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+  6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+  7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded.  In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+  9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation.  If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+  10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission.  For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this.  Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+                           NO WARRANTY
+
+  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+                    END OF TERMS AND CONDITIONS
+
+           How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program; if not, write to the Free Software
+    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  <signature of Ty Coon>, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Library General
+Public License instead of this License.

Added: trunk/apps/yafi/Changelog
===================================================================
--- trunk/apps/yafi/Changelog   2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/Changelog   2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,34 @@
+20040506:
+- Initial release
+
+20040517:
+- Insertion
+- Successful splitfile segments are cached locally
+- Lots of bugfixes and code cleanups
+
+20040517:
+- Fixed some obnoxious bugs
+
+20040530:
+- Fixed a CHK gen bug
+- Fixed a major splitfile uploading bug.
+- Temporarily removed multisegment splitfile uploading.
+- Removed the upload temp file stuff; it was stupid.
+
+20040611:
+- Added splitfile healing
+- Display changes (improvements?)
+- Various bug fixes
+
+20040619:
+- Fixed a stupid bug that was causing inserts to time out
+- Put the upload temp file stuff back in; it wasn't really so stupid
+- Some refactoring
+
+20040801:
+- Healing now works correctly for large files
+- The FEC code is more modular; this will make things much easier when the
+  Freenet maximum file size changes
+- The FCPSocket now buffers properly; it no longer sucks
+- Colorized console output
+

Added: trunk/apps/yafi/Makefile
===================================================================
--- trunk/apps/yafi/Makefile    2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/Makefile    2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,26 @@
+FILES = yafi fcp.py fec.py freenet.py metadata.py mimedata.py onionfec_a_1_2.py COPYING Makefile README Changelog
+FECFILES = fec/fec.c fec/fec.h fec/onionfecmodule.c fec/Makefile
+TARBALL = yafi-$(shell date +%Y%m%d).tar.bz2
+FEC8 = onionfec8.so
+FEC16 = onionfec16.so
+
+all: $(FEC8)
+
+$(FEC8): $(FECFILES)
+       make -C fec $@
+       cp fec/$@ .
+
+$(FEC16): $(FECFILES)
+       make -C fec $@
+       cp fec/$@ .
+
+dist: $(TARBALL)
+
+$(TARBALL): $(FILES) $(FECFILES)
+       rm -f yafi-*.tar.bz2
+       tar cjf $(TARBALL) $(FILES) $(FECFILES)
+
+clean:
+       rm -f *.so
+       make -C fec clean
+

Added: trunk/apps/yafi/README
===================================================================
--- trunk/apps/yafi/README      2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/README      2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,53 @@
+Yet Another Freenet Interface
+-----------------------------
+by Tailchaser
+
+INSTALL
+-----------------------------
+Type 'make'.
+Copy *.py and onionfec.so to a place where Python can find them.
+
+USAGE
+-----------------------------
+This program is designed to be powerful and flexible, not necessarily easy to
+learn.  There will be an easy-to-use GUI someday.
+Quick help:
+'yafi get <key>' will fetch <key>, and save it to the current directory, using
+  the part of the key after the final '/' as the filename.
+'yafi get -f<filename> <key>' will save <key> as <filename>
+'yafi get <key-1> ... <key-n>' will save all the keys; a -f switch is ignored.
+'yafi get -i<file>' will read the keys to download from <file>; <file> should
+  contain one key per line.  Any line that doesn't look like a key is ignored.
+  This is very useful for downloading lots of small files, like a picture
+  collection.  Save the keys as 'keys.txt', and run 'yafi get -ikeys.txt -s'
+  repeatedly until all the files are downloaded.  Without the '-o' switch,
+  existing files won't be downloaded again, so when you run this repeatedly it
+  will only try to get stuff that hasn't already succeeded.
+
+'yafi -h' will print some help about the other switches.
+'-v' will print a lot of information about the download.
+'-a', '-t', and '-r' are useful for splitfile downloads.
+
+Insertion now works, but it might be a little flaky; try
+'yafi put -v <filename>'
+Most of the options that would make sense also affect insertion.
+
+NOTES
+----------------------------
+This program is Free Software, licensed under the GPL except where otherwise
+noted (some stuff in the fec directory isn't written by me, but it's all Free).
+
+I don't use Windows, and don't have access to a Windows development box, so
+don't expect me to ever release a precompiled Windows version.  Everything
+should be portable, though, and if you want to port it I'll offer any help I
+can.
+
+If you want to contact me, use the yafi-public board on Frost.
+I also frequently idle on the #Freenet channel on Freenode, if you don't mind
+non-anonymous communication.
+
+TODO
+-----------------------------
+A GUI; this will probably be Qt-based.
+Colorize CLI output; I don't know how to do this portably.
+

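The fcp.py module added below speaks FCP as newline-delimited text: a message-type line, then key=value fields, terminated by a bare `EndMessage` line, with numeric fields hex-encoded (as `hello()` and `info()` decode with `int(value, 16)`). A minimal standalone sketch of that parsing, with illustrative sample fields rather than a real node response:

```python
def parse_fcp_message(raw):
    """Split one FCP response into (message type, field dict).

    The first line names the message, the following 'key=value' lines
    are its fields, and a bare 'EndMessage' line terminates it.
    """
    lines = raw.strip().split('\n')
    msgtype = lines[0]
    fields = {}
    for line in lines[1:]:
        if line == 'EndMessage':
            break
        key, _, value = line.partition('=')
        fields[key] = value
    return msgtype, fields

# Numeric FCP fields are hex-encoded, so callers convert them explicitly,
# mirroring what hello() and info() do with int(value, 16).
msgtype, fields = parse_fcp_message(
    'NodeHello\nProtocol=1.2\nMaxFileSize=10000\nEndMessage\n')
maxsize = int(fields['MaxFileSize'], 16)
```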
Added: trunk/apps/yafi/fcp.py
===================================================================
--- trunk/apps/yafi/fcp.py      2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/fcp.py      2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,719 @@
+"""A module for communicating with Freenet nodes via the FCP interface.
+
+Documentation on the FCP interface can be found at:
+    http://freenet.sourceforge.net/index.php?page=fcp
+"""
+
+import socket
+import threading
+import time
+from pprint import pprint
+import sys
+import copy
+
+
+class FCP:
+    """The main FCP class.
+
+       Instantiate with: FCP([host[, port]])
+           host - host's name (default: localhost)
+           port - port number (default: 8481)
+    """
+
+    def __init__(self, host='localhost', port=8481, statevent=None):
+        self.host = host
+        self.port = port
+        self._statevent = statevent
+        # Call hello() so we'll get a ConnectError if the node isn't talking.
+        self.hello()
+        self._statlock = threading.Lock()
+        self._setstatus('Stopped')
+        self._done = False
+
+    def __str__(self):
+        return "FCP connection to %s:%s." % (self.host, self.port)
+
+    def hello(self):
+        "Handshake with a ClientHello message."
+        self._sock = FCPSocket(self.host, self.port)
+        self._sock.send('ClientHello\nEndMessage\n')
+        res = self._sock.readline()
+        self._checkerror(res, self._sock)
+        res += self._sock.recvall(1024)
+        self._sock.close()
+        
+        res = res.split('\n')
+        res = res[1:-2]
+        dict = {}
+        for i in res:
+            i = i.split('=')
+            dict[i[0]] = i[1]
+        
+        dict['MaxFileSize'] = int(dict['MaxFileSize'], 16)
+        dict['Protocol'] = float(dict['Protocol'])
+        if dict.has_key('HighestSeenBuild'):
+            dict['HighestSeenBuild'] = int(dict['HighestSeenBuild'])
+        
+        return dict
+
+    def info(self):
+        "Get a dictionary full of useful info with a ClientInfo message."
+        self._sock = FCPSocket(self.host, self.port)
+        self._sock.send('ClientInfo\nEndMessage\n')
+        res = self._sock.readline()
+        self._checkerror(res, self._sock)
+        res += self._sock.recvall(1024)
+        self._sock.close()
+        
+        res = res.split('\n')
+        res = res[1:-2]
+        dict = {}
+        for i in res:
+            i = i.split('=')
+            dict[i[0]] = i[1]
+
+        hexvals = ['Processors', 'AllocatedMemory', 'MaximumMemory', \
+                   'FreeMemory', 'DatastoreMax', 'DatastoreUsed', \
+                   'DatastoreFree', 'MaxFileSize', 'MostRecentTimestamp', \
+                   'LeastRecentTimestamp', 'AvailableThreads', 'ActiveJobs', \
+                   'RoutingTime', 'EstimatedLoad', 'EstimateRateLimitingLoad', \
+                   'NodePort']
+        for k in hexvals:
+            dict[k] = int(dict[k], 16)
+
+        if dict['IsTransient'] == 'true':
+            dict['IsTransient'] = True
+        else:
+            dict['IsTransient'] = False
+        
+        return dict
+
+    def get(self, key, htl, removelocal=False):
+        """Get a piece of data from Freenet.
+
+           Call with: get(key[, htl][, removelocal])
+               key - a Freenet key, such as KSK at gpl.txt
+               htl - Hops To Live (default: 10)
+               removelocal - boolean
+
+           Key may start with 'freenet:', but it doesn't have to.
+
+           Returns a tuple; the first part is metadata, the second data.
+           Both are strings, and either may be empty.
+        """
+        if removelocal:
+            removelocal = 'true'
+        else:
+            removelocal = 'false'
+        if not key.startswith('freenet:'):
+            key = 'freenet:' + key
+        s = 'ClientGet\nRemoveLocalKey=%s\nURI=%s\nHopsToLive=%s\nEndMessage\n' \
+            % (removelocal, key, str(htl))
+        self._sock = FCPSocket(self.host, self.port)
+        self._sock.send(s)
+
+        try:
+            # str.lstrip() strips characters, not a prefix; slice the scheme off instead.
+            self._setstatus('Active', key[len('freenet:'):])
+            # Eat any restarts at the beginning.
+            while True:
+                line = self._sock.readline()
+                if line != 'Restarted\n':
+                    break
+                self._setstatus('Active', 'Restart')
+                while True:
+                    line = self._sock.readline()
+                    if line == 'EndMessage\n':
+                        break
+                    line = line.split('=')
+                    if line[0] == 'Timeout':
+                        self._sock.settimeout(int(line[1], 16) / 1000.0)
+
+            if self._done:
+                return (None, None)
+
+            # Check for DNF, RNF, etc.
+            self._checkerror(line, self._sock)
+
+            # Fill up the info dict.
+            info = {}
+            while True:
+                line = self._sock.readline().strip()
+                if line != 'EndMessage':
+                    line = line.split('=')
+                    info[line[0]] = int(line[1], 16)
+                else:
+                    break
+
+            while not self._done:
+                restart = False
+                self._setstatus('Transferring', info['DataLength'])
+
+                bytesrcvd = 0
+                metarcvd = 0
+                metadata = ''
+                data = ''
+                # get the metadata and the first subsequent data chunk
+                if info.has_key('MetadataLength'):
+                    while info['MetadataLength'] > metarcvd:
+                        line = self._sock.readline()   # the 'DataChunk' line
+                        if line is None:
+                            break
+                        if line == 'Restarted\n':   # start the transfer over
+                            self._setstatus('Stopped')
+                            self._setstatus('Active', key)
+                            restart = True
+                            while True:
+                                line = self._sock.readline()
+                                if line is None:
+                                    break
+                                if line == 'EndMessage\n':
+                                    break
+                                line = line.split('=')
+                                if line[0] == 'Timeout':
+                                    self._sock.settimeout(int(line[1], 16) / 1000.0)
+                            break
+                        # Check for DNF, RNF, etc.
+                        self._checkerror(line, self._sock)
+                        line = self._sock.readline()
+                        if line is None:
+                            break
+                        chunklength = int(((line.strip()).split('='))[1], 16)
+                        self._sock.readline()   # the 'Data' line
+                        metaleft = info['MetadataLength'] - metarcvd
+                        if metaleft <= chunklength:
+                            metadata += self._sock.recvbytes(metaleft)
+                            metarcvd += metaleft
+                            newdata = self._sock.recvbytes(chunklength - metaleft)
+                            data += newdata
+                        else:
+                            metadata += self._sock.recvbytes(chunklength)
+                            metarcvd += chunklength
+                        bytesrcvd += chunklength
+                        self._setstatus('Transferring', bytesrcvd)
+                if restart:
+                    continue
+
+                # get the rest of the data chunks
+                while info['DataLength'] > bytesrcvd and not self._done:
+                    line = self._sock.readline()      # the 'DataChunk' line
+                    if line is None:
+                        break
+                    if line == 'Restarted\n':   # start the transfer over
+                        self._setstatus('Stopped')
+                        self._setstatus('Active', key)
+                        restart = True
+                        while True:
+                            line = self._sock.readline()
+                            if line is None:
+                                break
+                            if line == 'EndMessage\n':
+                                break
+                            line = line.split('=')
+                            if line[0] == 'Timeout':
+                                self._sock.settimeout(int(line[1], 16) / 1000.0)
+                        break
+                    self._checkerror(line, self._sock)
+                    line = self._sock.readline()    # the 'Length' line
+                    if line is None:
+                        break
+                    length = int(((line.strip()).split('='))[1], 16)
+                    self._sock.readline()           # the 'Data' line
+                    newdata = self._sock.recvbytes(length)
+                    data += newdata
+                    bytesrcvd += length
+                    self._setstatus('Transferring', bytesrcvd)
+
+                if restart:
+                    continue
+                else:
+                    break
+
+            self._sock.close()
+            if self._done:
+                return (None, None)
+            self._setstatus('Stopped')
+            return (metadata, data)
+        except TimeoutError:
+            self._sock.close()
+            self._setstatus('Stopped')
+            raise
+
+    def put(self, data, htl, metadata='', removelocal=True, URI='CHK@'):
+        """Insert a piece of data into Freenet.
+        """
+        if not URI.startswith('freenet:'):
+            URI = 'freenet:' + URI
+        if removelocal:
+            removelocal = 'true'
+        else:
+            removelocal = 'false'
+
+        metadatalen = len(metadata)
+        datalen = len(data) + metadatalen
+
+        s = "ClientPut\nRemoveLocalKey=%s\nHopsToLive=%s\nURI=%s\nDataLength=%s\nMetadataLength=%s\nData\n%s" % \
+            (removelocal, hex(htl)[2:], URI, hex(datalen)[2:], hex(metadatalen)[2:], metadata + data)
+
+        self._sock = FCPSocket(self.host, self.port)
+        try:
+            self._sock.sendall(s)
+        except TimeoutError:
+            raise
+
+        try:
+            # slice off the 'freenet:' prefix; lstrip() strips characters, not a prefix
+            self._setstatus('Active', URI[len('freenet:'):])
+            # Eat any restarts at the beginning.
+            while not self._done:
+                line = self._sock.readline()
+                if line is None:
+                    break
+                if line != 'Restarted\n':
+                    break
+                self._setstatus('Active', 'Restart')
+                while True:
+                    line = self._sock.readline()
+                    if line == 'EndMessage\n':
+                        break
+                    line = line.split('=')
+                    if line[0] == 'Timeout':
+                        self._sock.settimeout(int(line[1], 16) / 1000.0)
+
+            self._setstatus('Transferring', 'Put')
+            res = {}
+            while not self._done:
+                self._checkerror(line, self._sock)
+                if line == 'Pending\n':
+                    self._setstatus('Transferring', 'Pending')
+                    while True:
+                        line = self._sock.readline()
+                        if line == 'EndMessage\n':
+                            break
+                        line = line.split('=')
+                        if line[0] == 'Timeout':
+                            self._sock.settimeout(int(line[1], 16) / 1000.0)
+                elif line == 'Success\n':
+                    break
+                elif line == 'Restarted\n':
+                    while True:
+                        line = self._sock.readline()
+                        if line is None:
+                            break
+                        if line == 'EndMessage\n':
+                            break
+                        line = line.split('=')
+                        if line[0] == 'Timeout':
+                            self._sock.settimeout(int(line[1], 16) / 1000.0)
+                line = self._sock.readline()
+            while not self._done:
+                line = self._sock.readline()
+                if line == 'EndMessage\n':
+                    break
+                t = line.strip().split('=')
+                res[t[0]] = t[1]
+                if t[0] == 'URI' and t[1].startswith('freenet:'):
+                    # lstrip() strips characters, not a prefix; slice it off instead
+                    res['URI'] = t[1][len('freenet:'):]
+
+            self._sock.close()
+            self._setstatus('Stopped')
+            return res
+        except TimeoutError:
+            self._sock.close()
+            self._setstatus('Stopped')
+            raise
+
+    def genCHK(self, data='', metadata=''):
+        """Generate a CHK for a piece of data.
+
+           data and metadata should be strings.
+           Returns a string of the form: CHK@<string>
+        """
+        metalen = len(metadata)
+        datalen = len(data) + metalen
+        s = 'GenerateCHK\nDataLength=%s\nMetadataLength=%s\nData\n%s' \
+            % (hex(datalen)[2:], hex(metalen)[2:], metadata + data)
+        while True:
+            try:
+                self._sock = FCPSocket(self.host, self.port)
+                self._sock.sendall(s)
+                line = self._sock.readline()
+                self._checkerror(line, self._sock)
+                line = self._sock.readline()
+                self._sock.readline() #'EndMessage'
+                self._sock.close()
+                break
+            except TimeoutError:
+                continue
+        res = line.strip()
+        if res.startswith('URI=freenet:'):
+            # lstrip('URI=freenet:') strips characters, not the prefix; slice instead
+            res = res[len('URI=freenet:'):]
+        return res
+
+    def genSVK(self):
+        """Generate an SVK keypair.
+
+           Returns a dictionary with keys
+           "PublicKey", "PrivateKey", and "CryptoKey".
+        """
+        s = 'GenerateSVKPair\nEndMessage\n'
+        self._sock = FCPSocket(self.host, self.port)
+        self._sock.send(s)
+        line = self._sock.readline()
+        self._checkerror(line, self._sock)
+        res = {}
+        while True:
+            line = self._sock.readline()
+            if line == 'EndMessage\n':
+                break
+            t = line.strip().split('=')
+            res[t[0]] = t[1]
+        self._sock.close()
+        return res
+
+    def rawget(self, key, removelocal=False, htl=10):
+        """Get a key from Freenet, without messing with metadata, chunks, or anything.
+        
+           Meant for debugging; don't use this, as it will probably disappear.
+        """
+        if removelocal:
+            removelocal = 'true'
+        else:
+            removelocal = 'false'
+        s = 'ClientGet\nRemoveLocalKey=%s\nURI=%s\nHopsToLive=%s\nEndMessage\n' \
+            % (removelocal, 'freenet:' + key, str(htl))
+        self._sock = FCPSocket(self.host, self.port)
+        self._sock.send(s)
+        res = ''
+        while 1:
+            newdata = self._sock.recvall(16384)
+            res += newdata
+            if not newdata:
+                break    
+        self._sock.close()
+        return res
+
+    # Broken-ass FEC stuff follows
+#    def FECSegmentFile(self, algoname, filelength):
+#        s = 'FECSegmentFile\nAlgoName=%s\nFileLength=%s\nEndMessage\n' % \
+#            (algoname, hex(filelength)[2:])
+#        self._sock = FCPSocket(self.host, self.port)
+#        self._sock.send(s)
+#        line = self._sock.readline()
+#        self._checkerror(line, self._sock)
+#        res = []
+#        while True:
+#            if line == 'SegmentHeader\n':
+#                seghdr = {}
+#            elif line == 'EndMessage\n':
+#                res.append(seghdr)
+#                if seghdr['SegmentNum'] == seghdr['Segments'] - 1:
+#                    break
+#            elif line.startswith('FECAlgorithm'):
+#                seghdr['FECAlgorithm'] = line.strip().split('=')[1]
+#            else:
+#                line = line.strip().split('=')
+#                seghdr[line[0]] = int(line[1], 16)
+#            line = self._sock.readline()
+#        return res
+
+#    def FECEncodeSegment(self, data, segmentheader):
+#        """This shit doesn't work.
+#           I don't know if it's my problem or fred's problem, but don't use this.
+#        """
+#        shstring = 'SegmentHeader\n'
+#        for k in segmentheader.keys():
+#            shstring += k + '='
+#            if type(segmentheader[k]) is int:
+#                shstring += hex(segmentheader[k])[2:] + '\n'
+#            else:
+#                shstring += segmentheader[k] + '\n'
+#        shstring += 'EndMessage\n'
+#        s = 'FECEncodeSegment\nDataLength=%s\nData\n%s%s' % \
+#            (hex(len(shstring) + len(data))[2:], shstring, data)
+#        print len(s)
+#        print s[:1000]
+#        self._sock = FCPSocket(self.host, self.port)
+#        sock.sendall(s)
+#        while 1:
+#            newdata = sock.recv(16384)
+#            res += newdata
+#            if not newdata:
+#                break    
+#        sock.close()
+#        return res
+
+    # Check for errors in the node response.
+    # The first line received from the node should be passed in, along with the
+    # connected socket.  Bombs out with a properly packaged exception if there
+    # is an error, acts as a noop otherwise.
+    def _checkerror(self, res, sock):
+        err = res.strip()
+        if err == 'FormatError':
+            self._setstatus('Stopped')
+            e = FormatError()
+            line = self._sock.readline().strip()
+            e.Reason = (line.split('='))[1]
+            self._sock.close()
+            raise e
+        if err == 'Failed':
+            self._setstatus('Stopped')
+            e = FailedError()
+            line = self._sock.readline().strip()
+            e.Reason = (line.split('='))[1]
+            self._sock.close()
+            raise e            
+        if err == 'URIError':
+            self._setstatus('Stopped')
+            self._sock.close()
+            raise URIError
+        if err == 'DataNotFound':
+            self._setstatus('Stopped')
+            self._sock.close()
+            raise DNFError
+        if err == 'RouteNotFound':
+            self._setstatus('Stopped')
+            e = RNFError()
+            e.Unreachable = 0
+            e.Restarted = 0
+            e.Rejected = 0
+            while True:
+                line = self._sock.readline().strip()
+                if line == 'EndMessage':
+                    break
+                else:
+                    line = line.split('=')
+                    if line[0] == 'Unreachable':
+                        e.Unreachable = int(line[1], 16)
+                    elif line[0] == 'Restarted':
+                        e.Restarted = int(line[1], 16)
+                    elif line[0] == 'Rejected':
+                        e.Rejected = int(line[1], 16)
+            self._sock.close()
+            raise e
+        if err == 'KeyCollision':
+            self._setstatus('Stopped')
+            line = self._sock.readline().strip()
+            URI = (line.split('='))[1]
+            if URI.startswith('freenet:'):
+                URI = URI[len('freenet:'):]
+            self._sock.close()
+            raise KeyCollisionError(URI)
+        if err == 'SizeError':
+            self._setstatus('Stopped')
+            self._sock.close()
+            raise SizeError
+        
+    def getstatus(self):
+        """Get the status of the connection.
+
+           The status is a dict:
+             'State' is 'Stopped', 'Active', or 'Transferring'
+             'Key' is the key being transferred
+             'Restarts' is the number of times the connection has restarted
+             'DataSize' is the total size of the data being transferred
+             'DataTransferred' is the amount of data that has been transferred
+             'Pending' is the number of times we've received a pending message on an insert
+        """
+        self._statlock.acquire()
+        s = copy.copy(self._status)
+        self._statlock.release()
+        return s
+
+    def _setstatus(self, state, ext=''):
+        """Don't call this yourself!"""
+        self._statlock.acquire()
+        if state == 'Stopped':
+            self._status = {}
+            self._status['State'] = 'Stopped'
+        elif state == 'Active':
+            if self._status['State'] == 'Stopped':
+                self._status['State'] = 'Active'
+                self._status['Key'] = ext
+                self._status['Restarts'] = 0
+            elif self._status['State'] == 'Active':
+                if ext == 'Restart':
+                    self._status['Restarts'] += 1
+                else: print 'Unexpected %s' % ext
+            elif self._status['State'] == 'Transferring':
+                self._status['State'] = 'Active'
+                self._status['Restarts'] += 1
+            else:
+                print "Crap! An error in FCP setstatus!"
+                print "State: %s, Status: %s" % (state, self._status)
+        elif state == 'Transferring':
+            if self._status['State'] == 'Active':
+                self._status['State'] = 'Transferring'
+                self._status['DataTransferred'] = 0
+                if ext == 'Put':
+                    self._status['Pending'] = 0
+                else:
+                    self._status['DataSize'] = ext
+            elif self._status['State'] == 'Transferring':
+                if ext == 'Pending':
+                    self._status['Pending'] += 1
+                else:
+                    self._status['DataTransferred'] = ext   # data received
+            else:
+                print "Crap! An error in FCP setstatus!"
+                print "State: %s, Status: %s" % (state, self._status)
+        else:
+            print "Crap! Invalid state:", state
+        self._statlock.release()
+        if self._statevent is not None:
+            self._statevent.set()
+
+    def kill(self):
+        self._sock.kill()
+        self._done = True
+        
+
+class Error(Exception):
+    """Base class for errors in the FCP module."""
+    pass
+
+class ConnectError(Error):
+    """Unable to connect to FCP socket.
+
+       args is the (host, port) tuple that failed.
+    """
+    pass
+
+class FormatError(Error):
+    """Error in message format."""
+    pass
+
+class FailedError(Error):
+    """Error in the node itself."""
+    pass
+
+class URIError(Error):
+    pass
+
+class DNFError(Error):
+    pass
+
+class TimeoutError(Error):
+    pass
+
+class KeyCollisionError(Error):
+    pass
+
+class SizeError(Error):
+    pass
+
+class RNFError(Error):
+    """Route Not Found.
+
+       Attributes are Unreachable, Restarted, and Rejected.
+    """
+    pass
+
+class FCPSocket(socket.socket):
+    """FCP Socket: does some magic FCP stuff, as well as buffering and
+    timeouts.  You probably shouldn't use this for anything other than YAFI's
+    FCP code, because it's kind of quirky.
+
+    This socket has no recv() method; use readline(), recvbytes(), or recvall()
+    instead.
+    """
+
+    _magicstring = '\x00\x00\x00\x02'
+    _chunksize = 0x6000
+
+    def __init__(self, host, port):
+        socket.socket.__init__(self, socket.AF_INET, socket.SOCK_STREAM)
+
+        socket.socket.settimeout(self, 1)
+        self._timeout = 300
+        self._done = False
+
+        self._buf = ''
+        self._recv = self.recv
+        del self.recv
+
+        try:
+            self.connect((host, port))
+        except socket.error:
+            # FCPSocket has no self.host/self.port attributes; use the arguments
+            raise ConnectError(host, port)
+        self.send(self._magicstring)
+
+    def readline(self):
+        """Read until you get a newline.
+           Returns None if the connection closes on the other end.
+
+           The newline is returned at the end of the string.
+        """
+        starttime = time.time()
+        res = ''
+        newdata = ''
+        while not self._done:
+            i = self._buf.find('\n')
+            if i != -1:
+                res = self._buf[:i+1]
+                self._buf = self._buf[i+1:]
+                return res
+            try:
+                newdata = self._recv(self._chunksize)
+            except socket.timeout:
+                if self._timeout is not None and (time.time() - starttime) >= self._timeout:
+                    raise TimeoutError
+            else:
+                if newdata == '':
+                    raise TimeoutError
+            self._buf += newdata
+        return None
+
+    def recvbytes(self, size):
+        """Read until you get size bytes.
+
+            Be damn sure the other end is really going to send this much,
+            or this will hang.
+        """
+        starttime = time.time()
+        res = ''
+        newdata = ''
+        while not self._done:
+            l = len(self._buf)
+            if l >= size:
+                res = self._buf[:size]
+                self._buf = self._buf[size:]
+                return res
+            try:
+                newdata = self._recv(self._chunksize)
+            except socket.timeout:
+                if self._timeout is not None and (time.time() - starttime) >= self._timeout:
+                    raise TimeoutError
+            else:
+                if newdata == '':
+                    raise TimeoutError
+            self._buf += newdata
+        return None
+
+    def recvall(self, size):
+        """Read until the connection is closed remotely.
+        """
+        starttime = time.time()
+        newdata = ''
+        while not self._done:
+            try:
+                newdata = self._recv(self._chunksize)
+            except socket.timeout:
+                if self._timeout is not None and (time.time() - starttime) >= self._timeout:
+                    raise TimeoutError
+            if newdata == '':
+                res = self._buf
+                self._buf = ''
+                return res
+            else:
+                self._buf += newdata
+
+    def sendall(self, data):
+        socket.socket.settimeout(self, None)
+        socket.socket.sendall(self, data)
+        socket.socket.settimeout(self, 1)
+
+    def settimeout(self, secs):
+        self._timeout = secs
+
+    def kill(self):
+        self._done = True
+
+def debugwrite(s):
+    sys.stderr.write("%s:\t%s\n" % (threading.currentThread().getName(), s))
+    sys.stderr.flush()
+
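For orientation, the wire framing that put() assembles in fcp.py can be sketched standalone. The helper below (`build_clientput` is a hypothetical name, not part of yafi) mirrors the format the code builds: numeric fields are bare hex with no '0x' prefix, and the header ends with a 'Data' line followed by metadata then data.

```python
# Hypothetical illustration of the ClientPut framing used by put() above.
# FCP 1.x encodes lengths and HopsToLive as bare hex (no '0x' prefix).
def build_clientput(data, htl, metadata='', removelocal=True,
                    uri='freenet:CHK@'):
    total = len(metadata) + len(data)   # DataLength counts metadata too
    return ('ClientPut\n'
            'RemoveLocalKey=%s\n'
            'HopsToLive=%x\n'
            'URI=%s\n'
            'DataLength=%x\n'
            'MetadataLength=%x\n'
            'Data\n'
            '%s' % ('true' if removelocal else 'false',
                    htl, uri, total, len(metadata), metadata + data))
```

The same framing conventions (hex fields, 'EndMessage'/'Data' terminators) apply to the ClientGet and GenerateCHK messages elsewhere in the module.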

Added: trunk/apps/yafi/fec/Makefile
===================================================================
--- trunk/apps/yafi/fec/Makefile        2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/fec/Makefile        2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,25 @@
+CC=gcc
+
+CFLAGS=-fno-strict-aliasing -DNDEBUG -fPIC
+PYINCLUDE=-I/usr/include/python2.3
+
+all: onionfec8.so onionfec16.so
+
+onionfec8.so: fec8.o onionfecmodule.o
+       $(CC) -pthread -shared -o $@ $^
+
+onionfec16.so: fec16.o onionfecmodule.o
+       $(CC) -pthread -shared -o $@ $^
+
+fec8.o: fec.c fec.h
+       $(CC) $(CFLAGS) -DDGF_BITS=8 -c -o $@ $<
+
+fec16.o: fec.c fec.h
+       $(CC) $(CFLAGS) -DDGF_BITS=16 -c -o $@ $<
+
+onionfecmodule.o: onionfecmodule.c
+       $(CC) $(CFLAGS) $(PYINCLUDE) -c -o $@ $<
+
+clean:
+       rm -f *.o *.so
+

Added: trunk/apps/yafi/fec/fec.c
===================================================================
--- trunk/apps/yafi/fec/fec.c   2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/fec/fec.c   2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,909 @@
+/*
+ * fec.c -- forward error correction based on Vandermonde matrices
+ * 980624
+ * (C) 1997-98 Luigi Rizzo (luigi at iet.unipi.it)
+ *
+ * Portions derived from code by Phil Karn (karn at ka9q.ampr.org),
+ * Robert Morelos-Zaragoza (robert at spectra.eng.hawaii.edu) and Hari
+ * Thirumoorthy (harit at spectra.eng.hawaii.edu), Aug 1995
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above
+ *    copyright notice, this list of conditions and the following
+ *    disclaimer in the documentation and/or other materials
+ *    provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ * PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
+ * OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
+ * OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
+ * OF SUCH DAMAGE.
+ */
+
+/*
+ * The following parameter defines how many bits are used for
+ * field elements. The code supports any value from 2 to 16
+ * but fastest operation is achieved with 8 bit elements
+ * This is the only parameter you may want to change.
+ */
+#ifndef GF_BITS
+#define GF_BITS  8     /* code over GF(2**GF_BITS) - change to suit */
+#endif
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+/*
+ * compatibility stuff
+ */
+#ifdef MSDOS   /* but also for others, e.g. sun... */
+#define NEED_BCOPY
+#define bcmp(a,b,n) memcmp(a,b,n)
+#endif
+
+#ifdef NEED_BCOPY
+#define bcopy(s, d, siz)        memcpy((d), (s), (siz))
+#define bzero(d, siz)   memset((d), '\0', (siz))
+#endif
+
+/*
+ * stuff used for testing purposes only
+ */
+
+#ifdef TEST
+#define DEB(x)
+#define DDB(x) x
+#define        DEBUG   0       /* minimal debugging */
+#ifdef MSDOS
+#include <time.h>
+struct timeval {
+    unsigned long ticks;
+};
+#define gettimeofday(x, dummy) { (x)->ticks = clock() ; }
+#define DIFF_T(a,b) (1+ 1000000*(a.ticks - b.ticks) / CLOCKS_PER_SEC )
+typedef unsigned long u_long ;
+typedef unsigned short u_short ;
+#else /* typically, unix systems */
+#include <sys/time.h>
+#define DIFF_T(a,b) \
+       (1+ 1000000*(a.tv_sec - b.tv_sec) + (a.tv_usec - b.tv_usec) )
+#endif
+
+#define TICK(t) \
+       {struct timeval x ; \
+       gettimeofday(&x, NULL) ; \
+       t = x.tv_usec + 1000000* (x.tv_sec & 0xff ) ; \
+       }
+#define TOCK(t) \
+       { u_long t1 ; TICK(t1) ; \
+         if (t1 < t) t = 256000000 + t1 - t ; \
+         else t = t1 - t ; \
+         if (t == 0) t = 1 ;}
+       
+u_long ticks[10];      /* vars for timekeeping */
+#else
+#define DEB(x)
+#define DDB(x)
+#define TICK(x)
+#define TOCK(x)
+#endif /* TEST */
+
+/*
+ * You should not need to change anything beyond this point.
+ * The first part of the file implements linear algebra in GF.
+ *
+ * gf is the type used to store an element of the Galois Field.
+ * Must contain at least GF_BITS bits.
+ *
+ * Note: unsigned char will work up to GF(256) but int seems to run
+ * faster on the Pentium. We use int whenever have to deal with an
+ * faster on the Pentium. We use int whenever we have to deal with an
+ */
+#if (GF_BITS < 2 || GF_BITS > 16)
+#error "GF_BITS must be 2 .. 16"
+#endif
+#if (GF_BITS <= 8)
+typedef unsigned char gf;
+#else
+typedef unsigned short gf;
+#endif
+
+#define        GF_SIZE ((1 << GF_BITS) - 1)    /* powers of \alpha */
+
+/*
+ * Primitive polynomials - see Lin & Costello, Appendix A,
+ * and  Lee & Messerschmitt, p. 453.
+ */
+static char *allPp[] = {    /* GF_BITS polynomial              */
+    NULL,                  /*  0       no code                 */
+    NULL,                  /*  1       no code                 */
+    "111",                 /*  2       1+x+x^2                 */
+    "1101",                /*  3       1+x+x^3                 */
+    "11001",               /*  4       1+x+x^4                 */
+    "101001",              /*  5       1+x^2+x^5               */
+    "1100001",             /*  6       1+x+x^6                 */
+    "10010001",                    /*  7       1 + x^3 + x^7           */
+    "101110001",           /*  8       1+x^2+x^3+x^4+x^8       */
+    "1000100001",          /*  9       1+x^4+x^9               */
+    "10010000001",         /* 10       1+x^3+x^10              */
+    "101000000001",        /* 11       1+x^2+x^11              */
+    "1100101000001",       /* 12       1+x+x^4+x^6+x^12        */
+    "11011000000001",      /* 13       1+x+x^3+x^4+x^13        */
+    "110000100010001",     /* 14       1+x+x^6+x^10+x^14       */
+    "1100000000000001",            /* 15       1+x+x^15                */
+    "11010000000010001"            /* 16       1+x+x^3+x^12+x^16       */
+};
+
+
+/*
+ * To speed up computations, we have tables for logarithm, exponent
+ * and inverse of a number. If GF_BITS <= 8, we use a table for
+ * multiplication as well (it takes 64K, no big deal even on a PDA,
+ * especially because it can be pre-initialized and put into a ROM!),
+ * otherwise we use a table of logarithms.
+ * In any case the macro gf_mul(x,y) takes care of multiplications.
+ */
+
+static gf gf_exp[2*GF_SIZE];   /* index->poly form conversion table    */
+static int gf_log[GF_SIZE + 1];        /* Poly->index form conversion table */
+static gf inverse[GF_SIZE+1];  /* inverse of field elem.               */
+                               /* inv[\alpha**i]=\alpha**(GF_SIZE-i-1) */
+
+/*
+ * modnn(x) computes x % GF_SIZE, where GF_SIZE is 2**GF_BITS - 1,
+ * without a slow divide.
+ */
+static inline gf
+modnn(int x)
+{
+    while (x >= GF_SIZE) {
+       x -= GF_SIZE;
+       x = (x >> GF_BITS) + (x & GF_SIZE);
+    }
+    return x;
+}
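As an aside, the divide-free fold in modnn() above can be checked against an ordinary modulus. A minimal Python sketch (an illustration, not part of yafi; names mirror the C macros for GF_BITS=8):

```python
GF_BITS = 8
GF_SIZE = (1 << GF_BITS) - 1  # 255

def modnn(x):
    # Reduce x mod GF_SIZE by subtraction plus bit folding,
    # mirroring the divide-free loop in fec.c's modnn().
    while x >= GF_SIZE:
        x -= GF_SIZE
        x = (x >> GF_BITS) + (x & GF_SIZE)
    return x
```

For the log-sum inputs it actually receives (at most 2*(GF_SIZE-1)), this agrees with `x % GF_SIZE`.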
+
+#define SWAP(a,b,t) {t tmp; tmp=a; a=b; b=tmp;}
+
+/*
+ * gf_mul(x,y) multiplies two numbers. If GF_BITS<=8, it is much
+ * faster to use a multiplication table.
+ *
+ * USE_GF_MULC, GF_MULC0(c) and GF_ADDMULC(x) can be used when multiplying
+ * many numbers by the same constant. In this case the first
+ * call sets the constant, and others perform the multiplications.
+ * A value related to the multiplication is held in a local variable
+ * declared with USE_GF_MULC . See usage in addmul1().
+ */
+#if (GF_BITS <= 8)
+static gf gf_mul_table[GF_SIZE + 1][GF_SIZE + 1];
+
+#define gf_mul(x,y) gf_mul_table[x][y]
+
+#define USE_GF_MULC register gf * __gf_mulc_
+#define GF_MULC0(c) __gf_mulc_ = gf_mul_table[c]
+#define GF_ADDMULC(dst, x) dst ^= __gf_mulc_[x]
+
+static void
+init_mul_table()
+{
+    int i, j;
+    for (i=0; i< GF_SIZE+1; i++)
+       for (j=0; j< GF_SIZE+1; j++)
+           gf_mul_table[i][j] = gf_exp[modnn(gf_log[i] + gf_log[j]) ] ;
+
+    for (j=0; j< GF_SIZE+1; j++)
+           gf_mul_table[0][j] = gf_mul_table[j][0] = 0;
+}
+#else  /* GF_BITS > 8 */
+static inline gf
+gf_mul(gf x, gf y)
+{
+    if ( (x) == 0 || (y)==0 ) return 0;
+     
+    return gf_exp[gf_log[x] + gf_log[y] ] ;
+}
+#define init_mul_table()
+
+#define USE_GF_MULC register gf * __gf_mulc_
+#define GF_MULC0(c) __gf_mulc_ = &gf_exp[ gf_log[c] ]
+#define GF_ADDMULC(dst, x) { if (x) dst ^= __gf_mulc_[ gf_log[x] ] ; }
+#endif
+
+/*
+ * Generate GF(2**m) from the irreducible polynomial p(X) in p[0]..p[m]
+ * Lookup tables:
+ *     index->polynomial form          gf_exp[] contains j= \alpha^i;
+ *     polynomial form -> index form   gf_log[ j = \alpha^i ] = i
+ * \alpha=x is the primitive element of GF(2^m)
+ *
+ * For efficiency, gf_exp[] has size 2*GF_SIZE, so that a simple
+ * multiplication of two numbers can be resolved without calling modnn
+ */
+
+/*
+ * i use malloc so many times, it is easier to put checks all in
+ * one place.
+ */
+static void *
+my_malloc(int sz, char *err_string)
+{
+    void *p = malloc( sz );
+    if (p == NULL) {
+       fprintf(stderr, "-- malloc failure allocating %s\n", err_string);
+       exit(1) ;
+    }
+    return p ;
+}
+
+#define NEW_GF_MATRIX(rows, cols) \
+    (gf *)my_malloc(rows * cols * sizeof(gf), " ## __LINE__ ## " )
+
+/*
+ * initialize the data structures used for computations in GF.
+ */
+static void
+generate_gf(void)
+{
+    int i;
+    gf mask;
+    char *Pp =  allPp[GF_BITS] ;
+
+    mask = 1;  /* x ** 0 = 1 */
+    gf_exp[GF_BITS] = 0; /* will be updated at the end of the 1st loop */
+    /*
+     * first, generate the (polynomial representation of) powers of \alpha,
+     * which are stored in gf_exp[i] = \alpha ** i .
+     * At the same time build gf_log[gf_exp[i]] = i .
+     * The first GF_BITS powers are simply bits shifted to the left.
+     */
+    for (i = 0; i < GF_BITS; i++, mask <<= 1 ) {
+       gf_exp[i] = mask;
+       gf_log[gf_exp[i]] = i;
+       /*
+        * If Pp[i] == 1 then \alpha ** i occurs in poly-repr
+        * gf_exp[GF_BITS] = \alpha ** GF_BITS
+        */
+       if ( Pp[i] == '1' )
+           gf_exp[GF_BITS] ^= mask;
+    }
+    /*
+     * now gf_exp[GF_BITS] = \alpha ** GF_BITS is complete, so we can also
+     * compute its inverse.
+     */
+    gf_log[gf_exp[GF_BITS]] = GF_BITS;
+    /*
+     * Poly-repr of \alpha ** (i+1) is given by poly-repr of
+     * \alpha ** i shifted left one-bit and accounting for any
+     * \alpha ** GF_BITS term that may occur when poly-repr of
+     * \alpha ** i is shifted.
+     */
+    mask = 1 << (GF_BITS - 1 ) ;
+    for (i = GF_BITS + 1; i < GF_SIZE; i++) {
+       if (gf_exp[i - 1] >= mask)
+           gf_exp[i] = gf_exp[GF_BITS] ^ ((gf_exp[i - 1] ^ mask) << 1);
+       else
+           gf_exp[i] = gf_exp[i - 1] << 1;
+       gf_log[gf_exp[i]] = i;
+    }
+    /*
+     * log(0) is not defined, so use a special value
+     */
+    gf_log[0] =        GF_SIZE ;
+    /* set the extended gf_exp values for fast multiply */
+    for (i = 0 ; i < GF_SIZE ; i++)
+       gf_exp[i + GF_SIZE] = gf_exp[i] ;
+
+    /*
+     * again special cases. 0 has no inverse. This used to
+     * be initialized to GF_SIZE, but it should make no difference
+     * since no one is supposed to read from here.
+     */
+    inverse[0] = 0 ;
+    inverse[1] = 1;
+    for (i=2; i<=GF_SIZE; i++)
+       inverse[i] = gf_exp[GF_SIZE-gf_log[i]];
+}
+
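For readers following the table construction in generate_gf() above, here is a compact Python re-derivation of the GF(2^8) exp/log tables for allPp[8] = "101110001" (the polynomial 0x11D). It is an illustration, not part of yafi, and assumes \alpha = x is primitive, which holds for the listed polynomials:

```python
GF_BITS = 8
GF_SIZE = (1 << GF_BITS) - 1          # 255
PP = 0x11D  # 1 + x^2 + x^3 + x^4 + x^8, i.e. allPp[8] = "101110001"

gf_exp = [0] * (2 * GF_SIZE)          # doubled, as in fec.c, to skip modnn()
gf_log = [0] * (GF_SIZE + 1)

x = 1
for i in range(GF_SIZE):
    gf_exp[i] = x
    gf_log[x] = i
    x <<= 1                           # multiply by alpha
    if x & (1 << GF_BITS):            # reduce by the primitive polynomial
        x ^= PP
for i in range(GF_SIZE):
    gf_exp[i + GF_SIZE] = gf_exp[i]
gf_log[0] = GF_SIZE                   # log(0) undefined; sentinel, as in fec.c

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return gf_exp[gf_log[a] + gf_log[b]]
```

The doubled gf_exp table is what lets the C code (for GF_BITS > 8) add two logarithms without reducing mod GF_SIZE first.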
+/*
+ * Various linear algebra operations that i use often.
+ */
+
+/*
+ * addmul() computes dst[] = dst[] + c * src[]
+ * This is used often, so better optimize it! Currently the loop is
+ * unrolled 16 times, a good value for 486 and pentium-class machines.
+ * The case c=0 is also optimized, whereas c=1 is not. These
+ * calls are infrequent in my typical apps so I did not bother.
+ * 
+ * Note that gcc on
+ */
+#define addmul(dst, src, c, sz) \
+    if (c != 0) addmul1(dst, src, c, sz)
+
+#define UNROLL 16 /* 1, 4, 8, 16 */
+static void
+addmul1(gf *dst1, gf *src1, gf c, int sz)
+{
+    USE_GF_MULC ;
+    register gf *dst = dst1, *src = src1 ;
+    gf *lim = &dst[sz - UNROLL + 1] ;
+
+    GF_MULC0(c) ;
+
+#if (UNROLL > 1) /* unrolling by 8/16 is quite effective on the pentium */
+    for (; dst < lim ; dst += UNROLL, src += UNROLL ) {
+       GF_ADDMULC( dst[0] , src[0] );
+       GF_ADDMULC( dst[1] , src[1] );
+       GF_ADDMULC( dst[2] , src[2] );
+       GF_ADDMULC( dst[3] , src[3] );
+#if (UNROLL > 4)
+       GF_ADDMULC( dst[4] , src[4] );
+       GF_ADDMULC( dst[5] , src[5] );
+       GF_ADDMULC( dst[6] , src[6] );
+       GF_ADDMULC( dst[7] , src[7] );
+#endif
+#if (UNROLL > 8)
+       GF_ADDMULC( dst[8] , src[8] );
+       GF_ADDMULC( dst[9] , src[9] );
+       GF_ADDMULC( dst[10] , src[10] );
+       GF_ADDMULC( dst[11] , src[11] );
+       GF_ADDMULC( dst[12] , src[12] );
+       GF_ADDMULC( dst[13] , src[13] );
+       GF_ADDMULC( dst[14] , src[14] );
+       GF_ADDMULC( dst[15] , src[15] );
+#endif
+    }
+#endif
+    lim += UNROLL - 1 ;
+    for (; dst < lim; dst++, src++ )           /* final components */
+       GF_ADDMULC( *dst , *src );
+}
+
+/*
+ * computes C = AB where A is n*k, B is k*m, C is n*m
+ */
+static void
+matmul(gf *a, gf *b, gf *c, int n, int k, int m)
+{
+    int row, col, i ;
+
+    for (row = 0; row < n ; row++) {
+       for (col = 0; col < m ; col++) {
+           gf *pa = &a[ row * k ];
+           gf *pb = &b[ col ];
+           gf acc = 0 ;
+           for (i = 0; i < k ; i++, pa++, pb += m )
+               acc ^= gf_mul( *pa, *pb ) ;
+           c[ row * m + col ] = acc ;
+       }
+    }
+}
+
+#ifdef DEBUG
+/*
+ * returns 1 if the square matrix is the identity
+ * (only for test)
+ */
+static int
+is_identity(gf *m, int k)
+{
+    int row, col ;
+    for (row=0; row<k; row++)
+       for (col=0; col<k; col++)
+           if ( (row==col && *m != 1) ||
+                (row!=col && *m != 0) )
+                return 0 ;
+           else
+               m++ ;
+    return 1 ;
+}
+#endif /* debug */
+
+/*
+ * invert_mat() takes a matrix and produces its inverse
+ * k is the size of the matrix.
+ * (Gauss-Jordan, adapted from Numerical Recipes in C)
+ * Return non-zero if singular.
+ */
+DEB( int pivloops=0; int pivswaps=0 ; /* diagnostic */)
+static int
+invert_mat(gf *src, int k)
+{
+    gf c, *p ;
+    int irow, icol, row, col, i, ix ;
+
+    int error = 1 ;
+    int *indxc = my_malloc(k*sizeof(int), "indxc");
+    int *indxr = my_malloc(k*sizeof(int), "indxr");
+    int *ipiv = my_malloc(k*sizeof(int), "ipiv");
+    gf *id_row = NEW_GF_MATRIX(1, k);
+    gf *temp_row = NEW_GF_MATRIX(1, k);
+
+    bzero(id_row, k*sizeof(gf));
+    DEB( pivloops=0; pivswaps=0 ; /* diagnostic */ )
+    /*
+     * ipiv marks elements already used as pivots.
+     */
+    for (i = 0; i < k ; i++)
+       ipiv[i] = 0 ;
+
+    for (col = 0; col < k ; col++) {
+       gf *pivot_row ;
+       /*
+        * Zeroing column 'col', look for a non-zero element.
+        * First try on the diagonal, if it fails, look elsewhere.
+        */
+       irow = icol = -1 ;
+       if (ipiv[col] != 1 && src[col*k + col] != 0) {
+           irow = col ;
+           icol = col ;
+           goto found_piv ;
+       }
+       for (row = 0 ; row < k ; row++) {
+           if (ipiv[row] != 1) {
+               for (ix = 0 ; ix < k ; ix++) {
+                   DEB( pivloops++ ; )
+                   if (ipiv[ix] == 0) {
+                       if (src[row*k + ix] != 0) {
+                           irow = row ;
+                           icol = ix ;
+                           goto found_piv ;
+                       }
+                   } else if (ipiv[ix] > 1) {
+                       fprintf(stderr, "singular matrix\n");
+                       goto fail ; 
+                   }
+               }
+           }
+       }
+       if (icol == -1) {
+           fprintf(stderr, "XXX pivot not found!\n");
+           goto fail ;
+       }
+found_piv:
+       ++(ipiv[icol]) ;
+       /*
+        * swap rows irow and icol, so afterwards the diagonal
+        * element will be correct. Rarely done, not worth
+        * optimizing.
+        */
+       if (irow != icol) {
+           for (ix = 0 ; ix < k ; ix++ ) {
+               SWAP( src[irow*k + ix], src[icol*k + ix], gf) ;
+           }
+       }
+       indxr[col] = irow ;
+       indxc[col] = icol ;
+       pivot_row = &src[icol*k] ;
+       c = pivot_row[icol] ;
+       if (c == 0) {
+           fprintf(stderr, "singular matrix 2\n");
+           goto fail ;
+       }
+       if (c != 1) { /* otherwise this is a NOP */
+           /*
+            * this is done often, but optimizing is not so
+            * fruitful, at least in the obvious ways (unrolling)
+            */
+           DEB( pivswaps++ ; )
+           c = inverse[ c ] ;
+           pivot_row[icol] = 1 ;
+           for (ix = 0 ; ix < k ; ix++ )
+               pivot_row[ix] = gf_mul(c, pivot_row[ix] );
+       }
+       /*
+        * from all rows, remove multiples of the selected row
+        * to zero the relevant entry. (With the in-place inversion, as in
+        * Numerical Recipes, the entry actually ends up holding an element
+        * of the inverse rather than zero.)
+        * (Here, if we know that the pivot_row is the identity,
+        * we can optimize the addmul).
+        */
+       id_row[icol] = 1;
+       if (bcmp(pivot_row, id_row, k*sizeof(gf)) != 0) {
+           for (p = src, ix = 0 ; ix < k ; ix++, p += k ) {
+               if (ix != icol) {
+                   c = p[icol] ;
+                   p[icol] = 0 ;
+                   addmul(p, pivot_row, c, k );
+               }
+           }
+       }
+       id_row[icol] = 0;
+    } /* done all columns */
+    for (col = k-1 ; col >= 0 ; col-- ) {
+       if (indxr[col] <0 || indxr[col] >= k)
+           fprintf(stderr, "AARGH, indxr[col] %d\n", indxr[col]);
+       else if (indxc[col] <0 || indxc[col] >= k)
+           fprintf(stderr, "AARGH, indxc[col] %d\n", indxc[col]);
+       else
+       if (indxr[col] != indxc[col] ) {
+           for (row = 0 ; row < k ; row++ ) {
+               SWAP( src[row*k + indxr[col]], src[row*k + indxc[col]], gf) ;
+           }
+       }
+    }
+    error = 0 ;
+fail:
+    free(indxc);
+    free(indxr);
+    free(ipiv);
+    free(id_row);
+    free(temp_row);
+    return error ;
+}
+
+/*
+ * fast code for inverting a vandermonde matrix.
+ * XXX NOTE: It assumes that the matrix
+ * is not singular and _IS_ a vandermonde matrix. Only uses
+ * the second column of the matrix, containing the p_i's.
+ *
+ * Algorithm borrowed from "Numerical recipes in C" -- sec.2.8, but
+ * largely revised for my purposes.
+ * p = coefficients of the matrix (p_i)
+ * q = values of the polynomial (known)
+ */
+
+int
+invert_vdm(gf *src, int k)
+{
+    int i, j, row, col ;
+    gf *b, *c, *p;
+    gf t, xx ;
+
+    if (k == 1)        /* degenerate case, matrix must be p^0 = 1 */
+       return 0 ;
+    /*
+     * c holds the coefficient of P(x) = Prod (x - p_i), i=0..k-1
+     * b holds the coefficient for the matrix inversion
+     */
+    c = NEW_GF_MATRIX(1, k);
+    b = NEW_GF_MATRIX(1, k);
+
+    p = NEW_GF_MATRIX(1, k);
+   
+    for ( j=1, i = 0 ; i < k ; i++, j+=k ) {
+       c[i] = 0 ;
+       p[i] = src[j] ;    /* p[i] */
+    }
+    /*
+     * construct coeffs. recursively. We know c[k] = 1 (implicit)
+     * and start P_0 = x - p_0, then at each stage multiply by
+     * x - p_i generating P_i = x P_{i-1} - p_i P_{i-1}
+     * After k steps we are done.
+     */
+    c[k-1] = p[0] ;    /* really -p(0), but x = -x in GF(2^m) */
+    for (i = 1 ; i < k ; i++ ) {
+       gf p_i = p[i] ; /* see above comment */
+       for (j = k-1  - ( i - 1 ) ; j < k-1 ; j++ )
+           c[j] ^= gf_mul( p_i, c[j+1] ) ;
+       c[k-1] ^= p_i ;
+    }
+
+    for (row = 0 ; row < k ; row++ ) {
+       /*
+        * synthetic division etc.
+        */
+       xx = p[row] ;
+       t = 1 ;
+       b[k-1] = 1 ; /* this is in fact c[k] */
+       for (i = k-2 ; i >= 0 ; i-- ) {
+           b[i] = c[i+1] ^ gf_mul(xx, b[i+1]) ;
+           t = gf_mul(xx, t) ^ b[i] ;
+       }
+       for (col = 0 ; col < k ; col++ )
+           src[col*k + row] = gf_mul(inverse[t], b[col] );
+    }
+    free(c) ;
+    free(b) ;
+    free(p) ;
+    return 0 ;
+}
+
+static int fec_initialized = 0 ;
+
+static void init_fec()
+{
+    TICK(ticks[0]);
+    generate_gf();
+    TOCK(ticks[0]);
+    DDB(fprintf(stderr, "generate_gf took %ldus\n", ticks[0]);)
+    TICK(ticks[0]);
+    init_mul_table();
+    TOCK(ticks[0]);
+    DDB(fprintf(stderr, "init_mul_table took %ldus\n", ticks[0]);)
+    fec_initialized = 1 ;
+}
+
+/*
+ * This section contains the proper FEC encoding/decoding routines.
+ * The encoding matrix is computed starting with a Vandermonde matrix,
+ * and then transforming it into a systematic matrix.
+ */
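An editorial sketch of the construction just described (not text from the original source): build the n x k Vandermonde-style matrix V, split off its top k x k block V_1, and post-multiply by its inverse:

```latex
G \;=\; V V_1^{-1} \;=\;
\begin{pmatrix} I_k \\ V_2 V_1^{-1} \end{pmatrix},
\qquad
V = \begin{pmatrix} V_1 \\ V_2 \end{pmatrix}
```

The identity block on top makes the code systematic (the first k encoded packets are the data itself), and any k rows of G form an invertible matrix (the MDS property), which is what decoding from any k received packets relies on.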
+
+#define FEC_MAGIC      0xFECC0DEC
+
+struct fec_parms {
+    u_long magic ;
+    int k, n ;         /* parameters of the code */
+    gf *enc_matrix ;
+} ;
+
+void
+fec_free(struct fec_parms *p)
+{
+    if (p==NULL ||
+       p->magic != ( ( (FEC_MAGIC ^ p->k) ^ p->n) ^ (int)(p->enc_matrix)) ) {
+       fprintf(stderr, "bad parameters to fec_free\n");
+       return ;
+    }
+    free(p->enc_matrix);
+    free(p);
+}
+
+/*
+ * create a new encoder, returning a descriptor. This contains k,n and
+ * the encoding matrix.
+ */
+struct fec_parms *
+fec_new(int k, int n)
+{
+    int row, col ;
+    gf *p, *tmp_m ;
+
+    struct fec_parms *retval ;
+
+    if (fec_initialized == 0)
+       init_fec();
+
+    if (k > GF_SIZE + 1 || n > GF_SIZE + 1 || k > n ) {
+       fprintf(stderr, "Invalid parameters k %d n %d GF_SIZE %d\n",
+               k, n, GF_SIZE );
+       return NULL ;
+    }
+    retval = my_malloc(sizeof(struct fec_parms), "new_code");
+    retval->k = k ;
+    retval->n = n ;
+    retval->enc_matrix = NEW_GF_MATRIX(n, k);
+    retval->magic = ( ( FEC_MAGIC ^ k) ^ n) ^ (int)(retval->enc_matrix) ;
+    tmp_m = NEW_GF_MATRIX(n, k);
+    /*
+     * fill the matrix with powers of field elements, starting from 0.
+     * The first row is special, cannot be computed with exp. table.
+     */
+    tmp_m[0] = 1 ;
+    for (col = 1; col < k ; col++)
+       tmp_m[col] = 0 ;
+    for (p = tmp_m + k, row = 0; row < n-1 ; row++, p += k) {
+       for ( col = 0 ; col < k ; col ++ )
+           p[col] = gf_exp[modnn(row*col)];
+    }
+
+    /*
+     * quick code to build systematic matrix: invert the top
+     * k*k vandermonde matrix, multiply right the bottom n-k rows
+     * by the inverse, and construct the identity matrix at the top.
+     */
+    TICK(ticks[3]);
+    invert_vdm(tmp_m, k); /* much faster than invert_mat */
+    matmul(tmp_m + k*k, tmp_m, retval->enc_matrix + k*k, n - k, k, k);
+    /*
+     * the upper matrix is I so do not bother with a slow multiply
+     */
+    bzero(retval->enc_matrix, k*k*sizeof(gf) );
+    for (p = retval->enc_matrix, col = 0 ; col < k ; col++, p += k+1 )
+       *p = 1 ;
+    free(tmp_m);
+    TOCK(ticks[3]);
+
+    DDB(fprintf(stderr, "--- %ld us to build encoding matrix\n",
+           ticks[3]);)
+    DEB(pr_matrix(retval->enc_matrix, n, k, "encoding_matrix");)
+    return retval ;
+}
+
+/*
+ * fec_encode accepts as input pointers to the k data packets of size sz,
+ * and produces as output a packet pointed to by fec, computed
+ * with index "index".
+ */
+void
+fec_encode(struct fec_parms *code, gf *src[], gf *fec, int index, int sz)
+{
+    int i, k = code->k ;
+    gf *p ;
+
+    if (GF_BITS > 8)
+       sz /= 2 ;
+
+    if (index < k)
+         bcopy(src[index], fec, sz*sizeof(gf) ) ;
+    else if (index < code->n) {
+       p = &(code->enc_matrix[index*k] );
+        bzero(fec, sz*sizeof(gf));
+       for (i = 0; i < k ; i++)
+            addmul(fec, src[i], p[i], sz ) ;
+    } else
+       fprintf(stderr, "Invalid index %d (max %d)\n",
+           index, code->n - 1 );
+}
+
+/*
+ * shuffle() moves the src packets into their positions
+ */
+static int
+shuffle(gf *pkt[], int index[], int swaps[], int k)
+{
+    int i;
+
+    for ( i = 0 ; i < k ; ) {
+       if (index[i] >= k || index[i] == i)
+           i++ ;
+       else {
+           /*
+            * put pkt in the right position (first check for conflicts).
+            */
+           int c = index[i] ;
+
+           if (index[c] == c) {
+               DEB(fprintf(stderr, "\nshuffle, error at %d\n", i);)
+               return 1 ;
+           }
+           SWAP(index[i], index[c], int) ;
+           SWAP(pkt[i], pkt[c], gf *) ;
+           SWAP(swaps[i], swaps[c], int) ;
+       }
+    }
+    DEB( /* just test that it works... */
+    for ( i = 0 ; i < k ; i++ ) {
+       if (index[i] < k && index[i] != i) {
+           fprintf(stderr, "shuffle: after\n");
+           for (i=0; i<k ; i++) fprintf(stderr, "%3d ", index[i]);
+           fprintf(stderr, "\n");
+           return 1 ;
+       }
+    }
+    )
+    return 0 ;
+}
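As an editorial aside (not part of the original source), the logic of shuffle() translates to a few lines of Python: each data packet (index < k) is placed at the slot named by its index, check packets (index >= k) stay where they are, and a duplicate index is reported as an error, mirroring the C return code.

```python
# Editorial sketch of shuffle() above: data packets (index < k) are moved
# to the position named by their index; check packets (index >= k) stay put.
# Returns True on a conflict (duplicate index), False on success.
def shuffle(pkt, index, k):
    i = 0
    while i < k:
        if index[i] >= k or index[i] == i:
            i += 1
        else:
            c = index[i]
            if index[c] == c:
                # two packets claim slot c: the C code reports an error here
                return True
            index[i], index[c] = index[c], index[i]
            pkt[i], pkt[c] = pkt[c], pkt[i]
    return False

# example: data blocks 0 and 2 received out of order, plus check block 5
pkt, idx = ['b2', 'b0', 'chk5'], [2, 0, 5]
assert shuffle(pkt, idx, 3) is False
# pkt is now ['b0', 'chk5', 'b2'] and idx is [0, 5, 2]
```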
+
+/*
+ * build_decode_matrix constructs the decoding matrix given the
+ * indexes of the received packets: the relevant rows of the encoding
+ * matrix are gathered into a k*k matrix (allocated here, in row-major
+ * order) and inverted; NULL is returned if the matrix is singular.
+ */
+static gf *
+build_decode_matrix(struct fec_parms *code, gf *pkt[], int index[])
+{
+    int i , k = code->k ;
+    gf *p, *matrix = NEW_GF_MATRIX(k, k);
+
+    TICK(ticks[9]);
+    for (i = 0, p = matrix ; i < k ; i++, p += k ) {
+#if 1 /* this is simply an optimization, not very useful indeed */
+       if (index[i] < k) {
+           bzero(p, k*sizeof(gf) );
+           p[i] = 1 ;
+       } else
+#endif
+       if (index[i] < code->n )
+           bcopy( &(code->enc_matrix[index[i]*k]), p, k*sizeof(gf) ); 
+       else {
+           fprintf(stderr, "decode: invalid index %d (max %d)\n",
+               index[i], code->n - 1 );
+           free(matrix) ;
+           return NULL ;
+       }
+    }
+    TICK(ticks[9]);
+    if (invert_mat(matrix, k)) {
+       free(matrix);
+       matrix = NULL ;
+    }
+    TOCK(ticks[9]);
+    return matrix ;
+}
+
+/*
+ * fec_decode receives as input a vector of packets, the indexes of
+ * packets, and produces the correct vector as output.
+ *
+ * Input:
+ *     code: pointer to code descriptor
+ *     pkt:  pointers to received packets. They are modified
+ *           to store the output packets (in place)
+ *     index: pointer to packet indexes (modified)
+ *     sz:    size of each packet
+ */
+int
+fec_decode(struct fec_parms *code, gf *pkt[], int index[], int sz)
+{
+    gf *m_dec ; 
+    gf **new_pkt ;
+    int row, col , k = code->k ;
+    int *swaps, i;
+
+    swaps = my_malloc (k * sizeof (int), "new index table");
+    for (i = 0 ; i < k ; i++ )
+       swaps[i] = i;
+
+    if (GF_BITS > 8)
+       sz /= 2 ;
+
+    if (shuffle(pkt, index, swaps, k)) { /* error if true */
+       free(swaps);
+       return 1 ;
+    }
+    m_dec = build_decode_matrix(code, pkt, index);
+
+    if (m_dec == NULL) {
+       free(swaps);
+       return 1 ; /* error */
+    }
+    /*
+     * do the actual decoding
+     */
+    new_pkt = my_malloc (k * sizeof (gf * ), "new pkt pointers" );
+    for (row = 0 ; row < k ; row++ ) {
+       if (index[row] >= k) {
+           new_pkt[row] = my_malloc (sz * sizeof (gf), "new pkt buffer" );
+           bzero(new_pkt[row], sz * sizeof(gf) ) ;
+           for (col = 0 ; col < k ; col++ )
+               addmul(new_pkt[row], pkt[col], m_dec[row*k + col], sz) ;
+       }
+    }
+    /*
+     * move pkts to their final destination
+     */
+    for (row = 0 ; row < k ; row++ ) {
+       if (index[row] >= k) {
+           bcopy(new_pkt[row], pkt[row], sz*sizeof(gf));
+           free(new_pkt[row]);
+            index[row] = row;
+       }
+    }
+    free(new_pkt);
+    free(m_dec);
+
+    for (i = 0 ; i < k ; i++ )
+       index[i] = swaps[i];
+    free(swaps);
+
+    return 0;
+}
+
+/*********** end of FEC code -- beginning of test code ************/
+
+#if (TEST || DEBUG)
+void
+test_gf()
+{
+    int i ;
+    /*
+     * test gf tables. Sufficiently tested...
+     */
+    for (i=0; i<= GF_SIZE; i++) {
+        if (gf_exp[gf_log[i]] != i)
+           fprintf(stderr, "bad exp/log i %d log %d exp(log) %d\n",
+               i, gf_log[i], gf_exp[gf_log[i]]);
+
+        if (i != 0 && gf_mul(i, inverse[i]) != 1)
+           fprintf(stderr, "bad mul/inv i %d inv %d i*inv(i) %d\n",
+               i, inverse[i], gf_mul(i, inverse[i]) );
+       if (gf_mul(0,i) != 0)
+           fprintf(stderr, "bad mul table 0,%d\n",i);
+       if (gf_mul(i,0) != 0)
+           fprintf(stderr, "bad mul table %d,0\n",i);
+    }
+}
+#endif /* TEST */

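Editorial sketch (not part of yafi): the table construction that generate_gf(), init_mul_table() and the inverse[] loop above perform can be mirrored in a few lines of Python. The choice of primitive polynomial 0x11d (x^8 + x^4 + x^3 + x^2 + 1) is an assumption for the GF_BITS = 8 case; the C code selects its polynomial from a table not shown in this excerpt.

```python
# Editorial sketch: GF(2^8) exp/log/inverse tables as in the C code above.
GF_BITS = 8
GF_SIZE = (1 << GF_BITS) - 1        # 255
PRIM_POLY = 0x11D                   # assumed primitive polynomial for GF_BITS=8

gf_exp = [0] * (2 * GF_SIZE)        # doubled so gf_mul needs no modulo
gf_log = [0] * (GF_SIZE + 1)

x = 1
for i in range(GF_SIZE):
    gf_exp[i] = x
    gf_log[x] = i
    x <<= 1                          # multiply by alpha
    if x & (1 << GF_BITS):           # reduce modulo the primitive polynomial
        x ^= PRIM_POLY
for i in range(GF_SIZE, 2 * GF_SIZE):
    gf_exp[i] = gf_exp[i - GF_SIZE]

def gf_mul(a, b):
    """Multiplication via log/exp lookup, as gf_mul()/init_mul_table() do."""
    if a == 0 or b == 0:
        return 0
    return gf_exp[gf_log[a] + gf_log[b]]

# inverse[i] = alpha^(GF_SIZE - log i), exactly as computed in the C code
inverse = [0, 1] + [gf_exp[GF_SIZE - gf_log[i]] for i in range(2, GF_SIZE + 1)]

# the same sanity checks test_gf() performs
assert all(gf_exp[gf_log[i]] == i for i in range(1, GF_SIZE + 1))
assert all(gf_mul(i, inverse[i]) == 1 for i in range(1, GF_SIZE + 1))
```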
Added: trunk/apps/yafi/fec/fec.h
===================================================================
--- trunk/apps/yafi/fec/fec.h   2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/fec/fec.h   2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,52 @@
+/*
+ * fec.c -- forward error correction based on Vandermonde matrices
+ * 980614
+ * (C) 1997-98 Luigi Rizzo (luigi at iet.unipi.it)
+ *
+ * Portions derived from code by Phil Karn (karn at ka9q.ampr.org),
+ * Robert Morelos-Zaragoza (robert at spectra.eng.hawaii.edu) and Hari
+ * Thirumoorthy (harit at spectra.eng.hawaii.edu), Aug 1995
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above
+ *    copyright notice, this list of conditions and the following
+ *    disclaimer in the documentation and/or other materials
+ *    provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ * PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
+ * OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,
+ * OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
+ * OF SUCH DAMAGE.
+ */
+
+/*
+ * The following parameter defines how many bits are used for
+ * field elements. The code supports any value from 2 to 16
+ * but fastest operation is achieved with 8 bit elements
+ * This is the only parameter you may want to change.
+ */
+#ifndef GF_BITS
+#define GF_BITS  8     /* code over GF(2**GF_BITS) - change to suit */
+#endif
+
+#define        GF_SIZE ((1 << GF_BITS) - 1)    /* powers of \alpha */
+void fec_free(void *p) ;
+void * fec_new(int k, int n) ;
+void init_fec() ;
+void fec_encode(void *code, void *src[], void *dst, int index, int sz) ;
+int fec_decode(void *code, void *pkt[], int index[], int sz) ;
+
+/* end of file */

Added: trunk/apps/yafi/fec/onionfecmodule.c
===================================================================
--- trunk/apps/yafi/fec/onionfecmodule.c        2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/fec/onionfecmodule.c        2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,234 @@
+#include <Python.h>
+#include "fec.h"
+
+staticforward PyTypeObject fec_CodeType;
+
+static PyObject *FecError;
+
+typedef struct {
+    PyObject_HEAD
+    void *code;
+    int k, n;
+} fec_CodeObject;
+
+static PyObject* fec_code(PyObject* self, PyObject* args) {
+    fec_CodeObject* codeobject;
+    int k, n;
+    
+    if (!PyArg_ParseTuple(args, "ii:code", &k, &n))
+       return NULL;
+
+    codeobject = PyObject_New(fec_CodeObject, &fec_CodeType);
+    codeobject->code = fec_new(k, n);
+    codeobject->k = k;
+    codeobject->n = n;
+    
+    return (PyObject*)codeobject;
+}
+
+static void fec_code_dealloc(PyObject *self) {
+    fec_CodeObject *codeobject = (fec_CodeObject*)self;
+    fec_free(codeobject->code);
+    PyObject_Del(self);
+}
+
+static PyObject* fec_code_encode(fec_CodeObject* self, PyObject* args) {
+    int i, bs=0;
+    int k;
+    int index, size;
+    PyObject* datain;
+    PyObject** blocks;
+    PyObject* dataout;
+    PyObject* block;
+    int bl;
+    char** fdatain;
+    char* fdataout;
+    int error=0;
+
+    k = self->k;
+    if (!PyArg_ParseTuple(args, "OOii:encode",
+                         &datain, &dataout, &index, &size))
+       return NULL;
+
+    if ((i = PySequence_Size(datain)) == -1)
+       return NULL;
+    if (i < k) {
+       PyErr_SetString(FecError, "encode: data too short");
+       return NULL;
+    }
+    if (PyObject_AsWriteBuffer(dataout, &fdataout, &bl)) {
+       return NULL;
+    }
+    if (bl < size) {
+       PyErr_SetString(FecError, "encode: dataout too small");
+       return NULL;
+    }
+    if ((fdatain = (char**)PyMem_Malloc(k * sizeof(char*))) == NULL) {
+       PyErr_SetString(FecError, "encode: could not allocate memory");
+       return NULL;
+    }
+    if ((blocks = (PyObject**)PyMem_Malloc(k * sizeof(PyObject*))) == NULL) {
+       PyErr_SetString(FecError, "encode: could not allocate memory");
+       goto error;
+    }
+    for (i = 0; i < k; i++) {
+       if ((blocks[i] = PySequence_GetItem(datain, i)) == NULL)
+           goto error;
+       bs++;
+       if (PyObject_AsReadBuffer(blocks[i], &(fdatain[i]), &bl))
+           goto error;
+       if (bl < size) {
+           PyErr_SetString(FecError, "encode: block too small");
+           goto error;
+       }
+    }
+    
+    fec_encode(self->code, (void**)fdatain, (void*)fdataout, index, size);
+
+    goto cleanup;
+    
+ error:
+    error = 1;
+
+ cleanup:
+    for (i = 0; i < bs; i++)
+       Py_DECREF(blocks[i]);
+    if (fdatain != NULL) PyMem_Free(fdatain);
+    if (blocks != NULL) PyMem_Free(blocks);
+
+    if (error)
+       return NULL;
+    
+    Py_INCREF(Py_None);
+    return Py_None;
+}
+
+static PyObject* fec_code_decode(fec_CodeObject* self, PyObject* args) {
+    int i, bs=0;
+    int k;
+    int size, bl;
+    PyObject* data;
+    PyObject* ixs;
+    PyObject** blocks = NULL;
+    PyObject* ix;
+    char** fdata = NULL;
+    int* fixs = NULL;
+    int error=0;
+    
+    k = self->k;
+
+    if (!PyArg_ParseTuple(args, "OOi:decode", &data, &ixs, &size))
+       goto error;
+       
+    if ((i = PySequence_Size(data)) == -1)
+       return NULL;
+    if (i < k) {
+       PyErr_SetString(FecError, "decode: data too short");
+       goto error;
+    }
+    if ((fdata = (char**)PyMem_Malloc(k * sizeof(char*))) == NULL) {
+       PyErr_SetString(FecError, "decode: could not allocate memory");
+       goto error;
+    }
+    if ((fixs = (int*)PyMem_Malloc(k * sizeof(int))) == NULL) {
+       PyErr_SetString(FecError, "decode: could not allocate memory");
+       goto error;
+    }
+    if ((blocks = (PyObject**)PyMem_Malloc(k * sizeof(PyObject*))) == NULL) {
+       PyErr_SetString(FecError, "decode: could not allocate memory");
+       goto error;
+    }
+    for (i = 0; i < k; i++) {
+       if ((ix = PyList_GetItem(ixs, i)) == NULL)
+           goto error;
+       if (!PyInt_Check(ix)) {
+           PyErr_SetString(FecError, "decode: non-int in indexes");
+           goto error;
+       }
+       fixs[i] = (int)PyInt_AsLong(ix);
+    }
+    for (i = 0; i < k; i++) {
+       if ((blocks[i] = PySequence_GetItem(data, i)) == NULL)
+           goto error;
+       bs++;
+       if (PyObject_AsWriteBuffer(blocks[i], &(fdata[i]), &bl))
+           goto error;
+       if (bl < size) {
+           PyErr_SetString(FecError, "decode: block too small");
+           goto error;
+       }
+    }
+
+    fec_decode(self->code, (void**)fdata, fixs, size);
+
+    for (i = 0; i < k; i++) {
+       if ((ix = PyInt_FromLong((long)fixs[i])) == NULL)
+           goto error;
+       if (PyList_SetItem(ixs, i, ix))
+           goto error;
+    }
+
+    goto cleanup;
+    
+ error:
+    error = 1;
+    
+ cleanup:
+    for (i = 0; i < bs; i++)
+         Py_DECREF(blocks[i]);
+    if (fdata != NULL) PyMem_Free(fdata);
+    if (fixs != NULL) PyMem_Free(fixs);
+    if (blocks != NULL) PyMem_Free(blocks);
+
+    if (error)
+       return NULL;
+
+    Py_INCREF(Py_None);
+    return Py_None;
+}
+
+static PyMethodDef fec_code_methods[] = {
+    {"encode", (PyCFunction)fec_code_encode, METH_VARARGS,
+     "encode data"},
+    {"decode", (PyCFunction)fec_code_decode, METH_VARARGS,
+     "decode data"},
+    {NULL, NULL, 0, NULL}
+};
+
+static PyObject* fec_code_getattr(PyObject* self, char* name) {
+    return Py_FindMethod(fec_code_methods, self, name);
+}
+
+static PyTypeObject fec_CodeType = {
+    PyObject_HEAD_INIT(NULL)
+    0,
+    "code",
+    sizeof(fec_CodeObject),
+    0,
+    fec_code_dealloc, /*tp_dealloc*/
+    0,                /*tp_print*/
+    fec_code_getattr, /*tp_getattr*/
+    0,                /*tp_setattr*/
+    0,                /*tp_compare*/
+    0,                /*tp_repr*/
+    0,                /*tp_as_number*/
+    0,                /*tp_as_sequence*/
+    0,                /*tp_as_mapping*/
+    0,                /*tp_hash*/
+};
+
+static PyMethodDef fec_methods[] = {
+    { "code", fec_code, METH_VARARGS,
+      "Create a new code object." },
+    { NULL, NULL, 0, NULL }
+};
+
+DL_EXPORT(void) initonionfec8(void) {
+    PyObject *m, *d;
+    
+    fec_CodeType.ob_type = &PyType_Type;
+    m = Py_InitModule("onionfec8", fec_methods);
+    d = PyModule_GetDict(m);
+    FecError = PyErr_NewException("onionfec8.FECError", NULL, NULL);
+    PyDict_SetItemString(d, "FECError", FecError);
+}

Added: trunk/apps/yafi/fec.py
===================================================================
--- trunk/apps/yafi/fec.py      2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/fec.py      2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,13 @@
+import onionfec_a_1_2
+
+algolist = {'OnionFEC_a_1_2': onionfec_a_1_2}
+
+def FEC(algoname):
+    if algoname in algolist:
+        return algolist[algoname].FEC()
+    else:
+        raise FECError, 'Unknown FEC algorithm.'
+
+class FECError(Exception):
+    pass
+

Added: trunk/apps/yafi/freenet.py
===================================================================
--- trunk/apps/yafi/freenet.py  2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/freenet.py  2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,753 @@
+"""freenet.py
+
+"""
+
+import time
+import threading
+import Queue
+import random
+import zipfile
+import tempfile
+import os
+import cPickle as pickle
+import math
+from copy import copy
+from sha import sha
+
+import mimedata
+import fcp
+from fcp import DNFError, RNFError, FormatError, URIError, TimeoutError, FailedError, KeyCollisionError
+import fec
+from fec import FECError
+import metadata as metadatalib
+
+
+MAXHTL = 20
+MAXSIZE = 0x100000  # Max size for a CHK block (1 MiB)
+DEFAULT_FEC = 'OnionFEC_a_1_2'
+
+def _shuffle(lst):
+    l = len(lst)
+    for dummy in range(int(l * 1.5)):
+        a = random.randrange(l)
+        b = random.randrange(l)
+        lst[a], lst[b] = lst[b], lst[a]
+
+class Node:
+    def __init__(self, host='localhost', port=8481, statevent=None,
+                 tmpdir='.tmp'):
+        self._host = host
+        self._port = port
+        self._statevent = statevent
+        self._tmpdir = tmpdir
+        try:
+            os.stat(self._tmpdir)
+        except OSError:
+            os.mkdir(self._tmpdir)
+
+        self._f = fcp.FCP(self._host, self._port, self._statevent)
+
+        self._initstatus()
+        self._threadpool = [] #Used to pass around splitfile status
+
+    def get(self, key, aggressive=False, htl=MAXHTL, threads=15,
+            healingthreads=10, retries=2, removelocal=False):
+        """Get a piece of data from Freenet.
+        """
+        self._aggressive=aggressive
+        self._htl = htl
+        self._retries = retries
+        self._nthreads = threads
+        self._hthreads = healingthreads
+        self._removelocal = removelocal
+        self._successblocks = []
+
+        return self._get(key)
+
+    def put(self, data, metadata='', htl=MAXHTL, removelocal=True,
+            privkey=None, cryptokey='', name=None, threads=5, aggressive=True,
+            filename=None):
+        """Insert a file into Freenet.
+        """
+        self._htl = htl
+        self._nthreads = threads
+        self._removelocal = removelocal
+        self._aggressive = aggressive
+        self._filename = filename
+
+        if len(data) > MAXSIZE:
+            return self._putsplitfile(data, privkey=privkey, cryptokey=cryptokey, name=name)
+        else:
+            self._setstatus('Active')
+            try:
+                res = self._put(data, metadata=metadata, privkey=privkey, cryptokey=cryptokey, name=name)
+            finally:
+                self._setstatus('Stopped')
+
+            return res
+
+    def heal(self, data, key, htl=MAXHTL, removelocal=True, threads=10,
+             percent=1.0, skipblocks=[]):
+        self._htl = htl
+        self._nthreads = threads
+        self._removelocal = removelocal
+        self._aggressive = True
+
+        while True:
+            try:
+                metadata = (self._f.get(key, htl=self._htl, removelocal=False))[0]
+            except (DNFError, RNFError, TimeoutError):
+                continue
+            break
+
+        docs = metadatalib.parse(metadata)
+        nblocks = int(docs[0]['SplitFile.BlockCount'], 16)
+        ncheckblocks = int(docs[0]['SplitFile.CheckBlockCount'], 16)
+
+        encoder = fec.FEC(DEFAULT_FEC)
+        blocks, checkblocks = encoder.encode(data)
+        del data
+
+        if len(blocks) != nblocks or len(checkblocks) != ncheckblocks:
+            raise HealingError
+        for i in range(len(blocks)):
+            if 'freenet:' + self._f.genCHK(blocks[i]) != \
+               docs[0]['SplitFile.Block.' + hex(i + 1)[2:]]:
+                raise HealingError
+        for i in range(len(checkblocks)):
+            if 'freenet:' + self._f.genCHK(checkblocks[i]) != \
+               docs[0]['SplitFile.CheckBlock.' + hex(i + 1)[2:]]:
+                raise HealingError
+
+        blocks += checkblocks
+
+        allblocks = []
+        for i in range(len(blocks)):
+            if i not in skipblocks:
+                allblocks.append(blocks[i])
+        del blocks, checkblocks
+        for i in range(len(allblocks) - int((len(allblocks) * percent))):
+            j = random.randrange(len(allblocks))
+            allblocks = allblocks[:j] + allblocks[j+1:]
+
+        self._setstatus('Healing', (len(allblocks),))
+        self._putblocks(allblocks, threads=self._hthreads)
+        self._setstatus('Stopped')
+
+    def rawget(self, key, htl=MAXHTL):
+        self._setstatus('Active')
+        try:
+            metadata, data = self._f.get(key, htl=htl)
+        except fcp.Error:
+            self._setstatus('Stopped')
+            raise
+        return metadata
+
+    def _get(self, key):
+        self._setstatus('Active')
+        i = key.rfind('//')
+        if i != -1:
+            filename = key[i + 2:]
+        else:
+            filename = None
+    
+        try:
+            metadata, data = self._f.get(key, htl=self._htl, removelocal=self._removelocal)
+        except fcp.Error:
+            self._setstatus('Stopped')
+            raise
+        if not data or filename is not None:
+            # Deal with the redirects.    
+            if filename is None:
+                filename = ''
+    
+            if data.startswith(zipfile.stringFileHeader):   # We have a container site.
+                return self._getfromzip(data, filename)
+            else:   # A regular redirect.
+                return self._getdoc(filename, metadata, data)
+        else:   # Finally, no redirects!
+            self._setstatus('Stopped')
+            return data
+
+    def _get_dbr(self, key, filename, increment, offset):
+        key = metadatalib.dbrkey(key, increment, offset)
+        try:
+            metadata, data = self._f.get(key, htl=self._htl, removelocal=self._removelocal)
+        except fcp.Error:
+            self._setstatus('Stopped')
+            raise
+        return self._getdoc(filename, metadata, data)
+    
+    def _getfromzip(self, zip, filename):
+        n = ziptempfile(sha(zip).hexdigest(), self._tmpdir)
+        t = open(n, 'w')
+        t.write(zip)
+        t.flush()
+        z = zipfile.ZipFile(t.name)
+        try:
+            data = z.read(filename)
+        except KeyError:
+            raise KeyNotFoundError, filename
+        z.close()
+        t.close()
+        os.remove(n)
+        self._setstatus('Stopped')
+        return data
+
+    def _getdoc(self, filename, metadata, data):
+        """Gets filename from a redirect in the metadata.
+        """
+        documents = metadatalib.parse(metadata)
+        if documents == [] and filename == '':
+            # Some real dipshit has inserted a file with no redirects when we were expecting one;
+            # data is probably empty
+            return data
+
+        for doc in documents:
+            if doc['Name'] == filename:
+                if doc.has_key('Redirect.Target'):
+                    self._setstatus('Redirect')
+                    return self._get(doc['Redirect.Target'])
+                elif doc.has_key('DateRedirect.Target'):
+                    self._setstatus('DBR')
+                    return self._get_dbr(doc['DateRedirect.Target'], filename,
+                           doc.setdefault('DateRedirect.Increment', 86400),
+                           doc.setdefault('DateRedirect.Offset', 0))
+                else:
+                    return self._getsplitfile(doc)
+
+        # The specified filename wasn't among the redirects; look for an unnamed redirect.
+        for doc in documents:
+            if doc.has_key('DateRedirect.Target') and doc['Name'] == '':
+                self._setstatus('DBR')
+                return self._get_dbr(doc['DateRedirect.Target'], filename,
+                       doc.setdefault('DateRedirect.Increment', 86400),
+                       doc.setdefault('DateRedirect.Offset', 0))
+    
+        # We can't find the specified key in the manifest; we're hosed.
+        self._setstatus('Stopped')
+        raise KeyNotFoundError, filename
+    
+    def _getsplitfile(self, doc):
+        def hexnums(n):
+            return [hex(i)[2:] for i in range(1, n + 1)]
+
+        try:
+            decoder = fec.FEC(doc['SplitFile.AlgoName'])
+        except fec.FECError:
+            self._setstatus('Stopped')
+            raise
+        size = int(doc['SplitFile.Size'], 16)
+        nblocks = int(doc['SplitFile.BlockCount'], 16)
+        ncheckblocks = int(doc['SplitFile.CheckBlockCount'], 16)
+        blockhexs = ['SplitFile.Block.' + n for n in hexnums(nblocks)]
+        checkblockhexs = ['SplitFile.CheckBlock.' + n for n in hexnums(ncheckblocks)]
+        nsegments = (nblocks / decoder.MAXBLOCKS) + 1
+        segtempfiles = []
+        self._setstatus('Splitfile', ('Size', int(doc['SplitFile.Size'], 16), nsegments))
+
+        #TODO: the status should have something about segments that are skipped
+        if nsegments > 1:
+            for seg in range(nsegments):
+                segfile = splittempfile(doc, seg, self._tmpdir)
+                try:
+                    os.stat(segfile)
+                except OSError:
+                    res = self._getsegment(blockhexs, checkblockhexs, doc, decoder, seg)
+                    self._setstatus('Decoding')
+                    if seg < (nsegments - 1):
+                        blocksize = len(res[0][1])
+                        segblocks = decoder.MAXBLOCKS
+                        segcheckblocks = decoder.MAXBLOCKS / 2
+                        segsize = blocksize * segblocks
+                    else:   # final segment
+                        segblocks = len(res)
+                        segcheckblocks = len(checkblockhexs[(seg * decoder.MAXBLOCKS) / 2:((seg * decoder.MAXBLOCKS) + decoder.MAXBLOCKS) / 2])
+                        segsize = size - ((MAXSIZE * decoder.MAXBLOCKS) * (nsegments - 1))
+                    segment = decoder.decode(res, segsize, segblocks, segcheckblocks)
+                    del res
+                    o = open(segfile, 'w')
+                    doneblocks = self.getstatus()['DoneBlocks']
+                    doneblocks = [a + (seg * decoder.MAXBLOCKS) for a in doneblocks]
+                    pickle.dump((doneblocks, segment), o)
+                    del segment
+                    o.close()
+                    segtempfiles.append(segfile)
+                else:
+                    segtempfiles.append(segfile)
+             
+            # now put all the segments together
+            res = ""
+            for filename in segtempfiles:
+                o = open(filename)
+                doneblocks, seg = pickle.load(o)
+                self._successblocks += doneblocks
+                res += seg
+                o.close()
+                os.remove(filename)
+
+        else:   # simplify things if we only have one segment
+            segment = self._getsegment(blockhexs, checkblockhexs, doc, decoder, 0)
+
+            self._setstatus('Decoding')
+            res = decoder.decode(segment, size, nblocks, ncheckblocks)
+
+            self._successblocks = self.getstatus()['DoneBlocks']
+
+        self._setstatus('Stopped')
+
+        # paranoid SHA check
+        if doc.has_key('Info.Checksum'):
+            if doc['Info.Checksum'] != sha(res).hexdigest():
+                raise ChecksumFailedError
+
+        return res
+
+    def successblocks(self):
+        return self._successblocks
+
+    def _getsegment(self, blockhexs, checkblockhexs, doc, decoder, segment=0):
+        nblocks = len(blockhexs)
+        ncheckblocks = len(checkblockhexs)
+        nsegments = (nblocks / decoder.MAXBLOCKS) + 1
+        offset = segment * decoder.MAXBLOCKS
+        offset_end = (offset + decoder.MAXBLOCKS)
+        blockhexs = blockhexs[offset:offset_end]
+        checkblockhexs = checkblockhexs[offset / 2:offset_end / 2]
+        nblocks = len(blockhexs)
+        ncheckblocks = len(checkblockhexs)
+
+        self._setstatus('Splitfile', ('Segment', segment + 1,
+                        (nblocks, nblocks + ncheckblocks)))
+
+        keyinfo = {}    # key -> (index, failed download attempts)
+        for block in blockhexs:
+            keyinfo[doc[block]] = (int(block[block.rfind('.') + 1:], 16) - 1 - offset, 0)
+        for block in checkblockhexs:
+            keyinfo[doc[block]] = (int(block[block.rfind('.') + 1:], 16) - 1 + nblocks - (offset / 2), 0)
+        blocks = Queue.Queue(0)
+        allblockhexs = blockhexs + checkblockhexs
+        _shuffle(allblockhexs)          # Try getting blocks in random order.
+        for i in allblockhexs:
+            blocks.put(doc[i])
+
+        # Initialize the thread pool.
+        reqq = Queue.Queue(0)
+        self._reqq = reqq
+        dataq = Queue.Queue(0)
+        for i in range(self._nthreads):
+            self._threadpool.append(RetrievalThread(reqq, dataq, self._host, self._port, self._statevent, self))
+        for t in self._threadpool:
+            t.start()
+
+        blocksrcvd = 0
+        blocksfailed = 0
+        doneblocks = []
+
+        for i in self._threadpool:
+            if not blocks.empty():
+                block = blocks.get()
+                self._setstatus('Splitfile', ('start', keyinfo[block][0]))
+                reqq.put((block, self._htl, keyinfo[block][0]))
+
+        while blocksrcvd < nblocks and \
+              ((self._aggressive and (blocksfailed + blocksrcvd < nblocks + ncheckblocks)) or \
+              (blocksfailed < ncheckblocks)):
+            try:
+                metadata, data, key = dataq.get(timeout=1)
+            except Queue.Empty:
+                continue
+            if metadata is not None:    # We got the data.
+                doneblocks.append((keyinfo[key][0], data))
+                blocksrcvd += 1
+                self._setstatus('Splitfile', ('success', keyinfo[key][0]))
+            elif data == 'RNF' or data == 'Timeout' or data == 'Other':
+                self._setstatus('Splitfile', ('RNF', keyinfo[key][0]))
+                blocks.put(key)
+            elif keyinfo[key][1] < self._retries:   # data == 'DNF'
+                keyinfo[key] = (keyinfo[key][0], keyinfo[key][1] + 1)
+                self._setstatus('Splitfile', ('failed', keyinfo[key][0]))
+                blocks.put(key)
+            else:
+                blocksfailed += 1
+                self._setstatus('Splitfile', ('failed', keyinfo[key][0]))
+
+            if not blocks.empty():
+                block = blocks.get()
+                self._setstatus('Splitfile', ('start', keyinfo[block][0]))
+                reqq.put((block, self._htl, keyinfo[block][0]))
+
+        # Close the thread pool.
+        self.close()
+
+        if len(doneblocks) < nblocks:
+            raise SegmentFailedError, (segment, len(doneblocks), nblocks)
+
+        return doneblocks
+
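The segment windowing in _getsegment (MAXBLOCKS data blocks per window, with the corresponding half-sized slice of check blocks) can be sketched in isolation. The constant and the `+ 1` that yields an extra segment when nblocks is an exact multiple both mirror the code above; `segment_slices` is an illustrative name, not part of this code:

```python
MAXBLOCKS = 128  # mirrors decoder.MAXBLOCKS for OnionFEC_a_1_2

def segment_slices(nblocks):
    # Data blocks are windowed MAXBLOCKS at a time; each window takes the
    # matching half-sized slice of the check-block list, as in _getsegment.
    nsegments = nblocks // MAXBLOCKS + 1
    slices = []
    for seg in range(nsegments):
        off = seg * MAXBLOCKS
        end = off + MAXBLOCKS
        slices.append(((off, end), (off // 2, end // 2)))
    return slices
```

For 200 data blocks this yields two segments: blocks 0-127 with check blocks 0-63, then blocks 128-199 with the remaining check blocks.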
+    def _put(self, data, metadata='', privkey=None, cryptokey='', name=None):
+        """This is for putting single (<=1MiB) keys.
+        """
+        if name is None:
+            if privkey is not None:
+                raise Error
+            else:
+                key = 'CHK@'
+        else:
+            if privkey is None:
+                key = 'KSK@%s' % name
+            else:
+                key = 'SSK@%s,%s/%s' % (privkey, cryptokey, name)
+
+        while True:
+            try:
+                return self._f.put(data=data, metadata=metadata, URI=key, removelocal=self._removelocal, htl=self._htl)
+            except (RNFError, TimeoutError):
+                if self._aggressive:
+                    self._setstatus('Retry')
+                    continue
+                else:
+                    raise
+
+    def _putsplitfile(self, data, privkey=None, cryptokey='', name=None, factor=1.5):
+
+        self._setstatus('Encoding')
+
+        encoder = fec.FEC(DEFAULT_FEC)
+
+        checksum = sha(data).hexdigest()
+        fecfile = fectempfile(checksum, self._tmpdir)
+        try:
+            o = open(fecfile)
+        except IOError:
+            blocks, checkblocks = encoder.encode(data)
+            o = open(fecfile, 'w')
+            pickle.dump((blocks, checkblocks), o)
+            o.close()
+        else:
+            blocks, checkblocks = pickle.load(o)
+            o.close()
+
+        metadatafile = metadatatempfile(checksum, self._tmpdir)
+        try:
+            o = open(metadatafile)
+        except IOError:
+            metadata = metadatalib.splitfilecreate(data, filename=self._filename, blocks=blocks, checkblocks=checkblocks, f=self._f, encoder=encoder)
+            o = open(metadatafile, 'w')
+            o.write(metadata)
+            o.close()
+        else:
+            metadata = o.read()
+            o.close()
+
+        self._setstatus('Active')
+        # Insert the manifest.
+        try:
+            res = self._put(data='', metadata=metadata, privkey=privkey, cryptokey=cryptokey, name=name)['URI']
+        except KeyCollisionError, e:
+            res = e[0]
+
+        # Insert the blocks.
+        self._setstatus('Splitfile', ('Put', len(data) * factor, len(blocks) + len(checkblocks), res))
+        self._putblocks(blocks + checkblocks, threads=self._nthreads)
+
+        self._setstatus('Stopped')
+
+        # Clean up; we should only reach this on successful insertion.
+        os.remove(fecfile)
+        os.remove(metadatafile)
+
+        return {'URI': res}
+
+    def _putblocks(self, blocks, threads):
+        """Takes a bunch of <1MiB blocks and sticks them in the network.
+        """
+        blockq = Queue.Queue(0)
+        allblocks = {}
+        for i in range(len(blocks)):
+            allblocks[i] = blocks[i]
+        blockkeys = allblocks.keys()
+        _shuffle(blockkeys)
+        totalblocks = len(blockkeys)
+        for i in blockkeys:
+            blockq.put(i)
+
+        # Initialize the thread pool.
+        reqq = Queue.Queue(0)
+        self._reqq = reqq
+        dataq = Queue.Queue(0)
+        for i in range(threads):
+            self._threadpool.append(InsertionThread(reqq, dataq, self._host, self._port, self._statevent, self))
+        for t in self._threadpool:
+            t.start()
+
+        for i in self._threadpool:
+            if not blockq.empty():
+                index = blockq.get()
+                data = allblocks[index]
+                self._setstatus('Splitfile', ('start', index))
+                reqq.put((data, self._htl, index))
+
+        doneblocks = 0
+
+        while doneblocks < totalblocks:
+            try:
+                result, index = dataq.get(timeout=1)
+            except Queue.Empty:
+                continue
+            if result is True:      # successful insertion
+                doneblocks += 1
+                self._setstatus('Splitfile', ('success', index))
+            else:
+                blockq.put(index)
+                self._setstatus('Splitfile', ('failed', index))
+            if not blockq.empty():
+                index = blockq.get()
+                data = allblocks[index]
+                self._setstatus('Splitfile', ('start', index))
+                reqq.put((data, self._htl, index))
+
+        # Close the thread pool.
+        self.close()
+
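_getsegment and _putblocks share the same shape: a work queue feeding a fixed thread pool, with failed items re-queued a bounded number of times. A self-contained sketch of that pattern in modern Python (`run_pool` and its names are illustrative, not from this code):

```python
import queue
import threading

def run_pool(items, work, nthreads=4, max_attempts=3):
    # work(item) -> True on success; failures are re-queued up to max_attempts.
    todo = queue.Queue()
    for item in items:
        todo.put((item, 0))
    done, failed = [], []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                item, attempts = todo.get_nowait()
            except queue.Empty:
                return
            if work(item):
                with lock:
                    done.append(item)
            elif attempts + 1 < max_attempts:
                todo.put((item, attempts + 1))  # retry, like blocks.put(key) above
            else:
                with lock:
                    failed.append(item)

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done, failed
```

Unlike the code above, which keeps threads alive across calls and stops via kill(), this sketch lets workers drain the queue and exit on their own.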
+    def getstatus(self):
+        """Returns a dictionary containing the state of the Node.
+
+           'State': 'Stopped', 'Active', 'Splitfile', 'Decoding'
+             'Stopped': the node is idle
+             'Decoding': the node is Decoding a splitfile segment
+               'Segment': (current segment, total segments)
+             'Active': transferring a non-splitfile
+               'Redirects': list of 'DBR' or 'Redirect'
+               'Retries': times an insert has retried
+               'Transfers': FCP status of transfer
+             'Splitfile':
+               'Segment': (current segment, total segments)
+               'Blocks': (required blocks, total blocks in segment)
+               'ActiveBlocks': list of blocks currently active
+               'DoneBlocks': list of blocks successfully retrieved
+               'FailedBlocks': list of blocks that have failed
+                               a block may appear here more than once
+               'Transfers': (block number, FCP status for that block)
+               'Size': total size in bytes
+
+        """
+        self._statlock.acquire()
+        s = copy(self._status)
+        self._statlock.release()
+        if s['State'] == 'Active':
+            s['Transfers'] = self._f.getstatus()
+            return s
+        elif s['State'] == 'Splitfile' or \
+             s['State'] == 'Healing':
+            s['Transfers'] = []
+            # getstatus() for a RetrievalThread includes the block number
+            for t in self._threadpool:
+                s['Transfers'].append(t.getstatus())
+            return s
+        else:
+            return s
+
+    def _setstatus(self, state, ext=''):
+        """Don't call this yourself!"""
+        self._statlock.acquire()
+        if state == 'Stopped':
+            self._status['State'] = 'Stopped'
+        elif state == 'Active':
+            if self._status['State'] == 'Stopped' or \
+               self._status['State'] == 'Splitfile' or \
+               self._status['State'] == 'Encoding':
+                self._status['State'] = 'Active'
+                self._status['Redirects'] = []
+                self._status['Retries'] = 0
+            elif self._status['State'] == 'Active':
+                pass
+            else:
+                print 'Crap! An error in Freenet setstatus!'
+        elif state == 'DBR':
+            self._status['Redirects'].append('DBR')
+        elif state == 'Redirect':
+            self._status['Redirects'].append('Redirect')
+        elif state == 'Retry':
+            self._status['Retries'] += 1
+        elif state == 'Decoding':
+            self._status['State'] = 'Decoding'
+        elif state == 'Encoding':
+            self._status['State'] = 'Encoding'
+        elif state == 'Healing':
+            self._status['State'] = 'Healing'
+            self._status['Blocks'] = ext[0]
+            self._status['ActiveBlocks'] = []
+            self._status['DoneBlocks'] = []
+            self._status['FailedBlocks'] = []
+            self._status['AllFailedBlocks'] = []
+        elif state == 'Splitfile':
+            if ext[0] == 'Size':
+                self._status['State'] = 'Splitfile'
+                self._status['Size'] = ext[1]
+                self._status['Segment'] = (0, ext[2])
+            elif ext[0] == 'Put':
+                self._status['State'] = 'Splitfile'
+                self._status['Size'] = ext[1]
+                self._status['Blocks'] = ext[2]
+                self._status['Key'] = ext[3]
+                self._status['ActiveBlocks'] = []
+                self._status['DoneBlocks'] = []
+                self._status['FailedBlocks'] = []
+                self._status['AllFailedBlocks'] = []
+            elif ext[0] == 'Segment':
+                self._status['State'] = 'Splitfile'
+                self._status['Segment'] = (ext[1], self._status['Segment'][1])
+                self._status['Blocks'] = ext[2]
+                self._status['ActiveBlocks'] = []
+                self._status['DoneBlocks'] = []
+                self._status['FailedBlocks'] = []
+                self._status['AllFailedBlocks'] = []
+            elif self._status['State'] == 'Splitfile' or \
+                 self._status['State'] == 'Healing':
+                if ext[0] == 'start':
+                    self._status['ActiveBlocks'].append(ext[1])
+                elif ext[0] == 'success':
+                    self._status['DoneBlocks'].append(ext[1])
+                    self._status['ActiveBlocks'].remove(ext[1])
+                    while ext[1] in self._status['FailedBlocks']:
+                        self._status['FailedBlocks'].remove(ext[1])
+                elif ext[0] == 'failed':
+                    self._status['FailedBlocks'].append(ext[1])
+                    self._status['AllFailedBlocks'].append(ext[1])
+                    self._status['ActiveBlocks'].remove(ext[1])
+                elif ext[0] == 'RNF':
+                    self._status['AllFailedBlocks'].append(ext[1])
+                    self._status['ActiveBlocks'].remove(ext[1])
+                else:
+                    print 'Crap! An error in Freenet setstatus!'
+            else:
+                print 'Crap! An error in Freenet setstatus!'
+        else:
+            print 'Crap! An error in Freenet setstatus!'
+        self._statlock.release()
+        if self._statevent is not None:
+            self._statevent.set()
+
+    def _initstatus(self):
+        self._statlock = threading.Lock()
+        self._status = {}
+        self._setstatus('Stopped')
+
+    def close(self):
+        for t in self._threadpool:
+            t.kill()
+        for t in self._threadpool:
+            self._reqq.put((None, None, None))
+        while len(self._threadpool) > 0:
+            for t in self._threadpool:
+                t.join(1)
+                self._threadpool.remove(t)
+
+
+class RetrievalThread(threading.Thread):
+    def __init__(self, requestqueue, dataqueue, host, port, statevent, parentnode):
+        threading.Thread.__init__(self)
+        self._reqq = requestqueue
+        self._dataq = dataqueue
+        self._f = fcp.FCP(host, port, statevent)
+        self._index = None
+        self._node = parentnode
+        self.setDaemon(True)
+        self._done = False
+
+    def run(self):
+        while not self._done:
+            key, htl, self._index = self._reqq.get()
+            # Limit our request rate a little bit.
+            f = (self._node.getstatus())['AllFailedBlocks'].count(self._index)
+            t = random.randrange(100 * (f / 2.0), 100 + 100 * pow(f, 2), 10) / 100.0
+            time.sleep(t)
+            if self._done:
+                break
+            try:
+                metadata, data = self._f.get(key, htl=htl)
+            except DNFError:
+                self._dataq.put((None, 'DNF', key))
+            except RNFError:
+                self._dataq.put((None, 'RNF', key))
+            except TimeoutError:
+                self._dataq.put((None, 'Timeout', key))
+            except FailedError:
+                self._dataq.put((None, 'Other', key))
+            else:
+                self._dataq.put((metadata, data, key))
+
+    def kill(self):
+        self._done = True
+        self._f.kill()
+
+    def getstatus(self):
+        return (self._index, self._f.getstatus())
+
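The sleep before each request grows roughly quadratically with the number of prior failures for that block. Python 3's random.randrange rejects the float arguments used above, so an integer-based equivalent of the same delay would look like this (`backoff_delay` is an illustrative name):

```python
import random

def backoff_delay(failures):
    # Quadratic backoff as in RetrievalThread.run: roughly between
    # failures/2 and 1 + failures**2 seconds, in 0.1 s steps.
    lo = int(100 * failures / 2)
    hi = 100 + 100 * failures ** 2
    return random.randrange(lo, hi, 10) / 100.0
```

With no failures the delay stays under a second; after three failures it ranges from 1.5 s up to just under 10 s.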
+class InsertionThread(threading.Thread):
+    def __init__(self, requestqueue, dataqueue, host, port, statevent, parentnode):
+        threading.Thread.__init__(self)
+        self._reqq = requestqueue
+        self._dataq = dataqueue
+        self._f = fcp.FCP(host, port, statevent)
+        self._index = None
+        self._done = False
+        self._node = parentnode
+
+    def run(self):
+        while not self._done:
+            data, htl, self._index = self._reqq.get()
+            f = (self._node.getstatus())['AllFailedBlocks'].count(self._index)
+            t = random.randrange(100 * (f / 2.0), 100 + 100 * pow(f, 2), 10) / 
100.0
+            time.sleep(t)
+            if self._done:
+                break
+            try:
+                self._f.put(data, htl=htl)
+            except (RNFError, FailedError, TimeoutError):
+                self._dataq.put((False, self._index))
+            except KeyCollisionError:
+                self._dataq.put((True, self._index))
+            else:
+                self._dataq.put((True, self._index))
+
+    def kill(self):
+        self._done = True
+        self._f.kill()
+
+    def getstatus(self):
+        return (self._index, self._f.getstatus())
+
+class Error(Exception):
+    pass
+
+class KeyNotFoundError(Error):
+    pass
+
+class SegmentFailedError(Error):
+    pass
+
+class HealingError(Error):
+    pass
+
+class ChecksumFailedError(Error):
+    pass
+
+def debugwrite(s):
+    sys.stderr.write("%s:\t%s\n" % (threading.currentThread().getName(), s))
+    sys.stderr.flush()
+
+def splittempfile(doc, seg, tmpdir):
+    return tmpdir + '/' + 'yafi-splitfile-' + hex(abs(hash(str(doc))))[2:] + \
+           '-' + str(seg + 1) + '.dat'
+
+def fectempfile(checksum, tmpdir):
+    return tmpdir + '/' + 'yafi-fec-' + checksum + '.dat'
+
+def metadatatempfile(checksum, tmpdir):
+    return tmpdir + '/' + 'yafi-metadata-' + checksum + '.dat'
+
+def ziptempfile(checksum, tmpdir):
+    return tmpdir + '/' + 'yafi-zip-' + checksum + '.dat'
+

Added: trunk/apps/yafi/metadata.py
===================================================================
--- trunk/apps/yafi/metadata.py 2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/metadata.py 2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,104 @@
+"""metadata.py
+"""
+
+import time
+from sha import sha
+
+import mimedata
+import fec
+
+def parse(metadata):
+    """Parses metadata into document dictionaries.
+    """
+    documents = metadata.split('Document\n')
+    if documents[0].startswith('Version\n'):
+        documents = documents[1:]   #Get rid of the 'Version' part
+    for i in range(len(documents)):
+        documents[i] = documents[i].split('\n')
+        documents[i] = documents[i][:-2]
+        d = {}
+        for j in documents[i]:
+            k = j.split('=')
+            d[k[0]] = k[1]
+        d.setdefault('Name', '')
+        documents[i] = d
+    return documents
+
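For reference, an equivalent of parse in modern Python, filtering on '=' rather than slicing the last two lines off each chunk (`parse_metadata` is an illustrative name; the 'Version'/'Document'/'EndPart' framing is assumed from the code above):

```python
def parse_metadata(metadata):
    docs = []
    for chunk in metadata.split('Document\n'):
        if chunk.startswith('Version\n'):
            continue  # drop the Version header part
        d = {}
        for line in chunk.split('\n'):
            if '=' in line:
                key, value = line.split('=', 1)
                d[key] = value
        if d:
            d.setdefault('Name', '')
            docs.append(d)
    return docs

md = ('Version\nRevision=1\nEndPart\n'
      'Document\nName=index.html\nRedirect.Target=freenet:CHK@abc\nEnd\n')
```

Each document comes back as a dict of its key=value lines, with 'Name' defaulted to the empty string exactly as in parse above.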
+def create(docs):
+    res = 'Version\nRevision=1\nEndPart\n'
+    for doc in docs:
+        if doc.has_key('Name') and doc['Name'] == '':
+            doc.pop('Name')
+        res += 'Document\n'
+        sortedkeys = doc.keys()
+        sortedkeys.sort(_mdcmp)
+        for key in sortedkeys:
+            res += "%s=%s\n" % (key, doc[key])
+        res += 'EndPart\n'
+    res = res[:-5] + res[-1:]   # turn the trailing 'EndPart' into 'End'
+    return res
+
+def splitfilecreate(data, filename=None, blocks=None, checkblocks=None, f=None, encoder=None):
+    """Create metadata for a splitfile.
+    """
+    if blocks is None or checkblocks is None:
+        blocks, checkblocks = fec.encode(data)
+
+    blockchks = map(f.genCHK, blocks)
+    checkblockchks = map(f.genCHK, checkblocks)
+    checksum = sha(data).hexdigest()
+    doc = {}
+    if filename is not None:
+        format = mimedata.get_type(filename)
+        if format is not None:
+            doc['Info.Format'] = format
+    doc['Info.Checksum'] = checksum
+    doc['SplitFile.AlgoName'] = encoder.NAME
+    doc['SplitFile.Size'] = hex(len(data))[2:]
+    doc['SplitFile.BlockCount'] = hex(len(blocks))[2:]
+    doc['SplitFile.CheckBlockCount'] = hex(len(checkblocks))[2:]
+    for i in range(len(blockchks)):
+        doc['SplitFile.Block.' + hex(i + 1)[2:]] = 'freenet:' + blockchks[i]
+    for i in range(len(checkblockchks)):
+        doc['SplitFile.CheckBlock.' + hex(i + 1)[2:]] = 'freenet:' + checkblockchks[i]
+
+    return create([doc])
+
+def dbrkey(target, increment=86400, offset=0, future=0):
+    """Generate a DBR key.
+    """
+    def dbrdate(increment, offset, future):
+        curtime = int(time.time())
+        i = (curtime - offset) / increment
+        return int((increment * i + offset) - (increment * future))
+    d = hex(dbrdate(increment, offset, future))[2:]
+    br = target.rfind('/') + 1
+    return target[:br] + d + '-' + target[br:]
+
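dbrkey rounds the current time down to an interval boundary and prefixes the hex date to the last path component of the key. A Python 3 equivalent with an explicit now parameter so the rounding is testable (that extra parameter is not in the original):

```python
import time

def dbrkey(target, increment=86400, offset=0, future=0, now=None):
    # Round 'now' down to the start of the current increment-sized interval.
    curtime = int(time.time()) if now is None else now
    i = (curtime - offset) // increment
    d = format(increment * i + offset - increment * future, 'x')
    # Prefix the hex date to the last path component of the key.
    br = target.rfind('/') + 1
    return target[:br] + d + '-' + target[br:]
```

With the default daily increment, a timestamp anywhere inside day 3 maps to the day boundary 259200 (0x3f480), so 'SSK@abc/index.html' becomes 'SSK@abc/3f480-index.html'.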
+def _mdcmp(x, y):
+    if x.startswith('SplitFile.Block.') and y.startswith('SplitFile.Block.'):
+        a = int(x[x.rfind('.') + 1:], 16)
+        b = int(y[y.rfind('.') + 1:], 16)
+        if a < b:
+            return -1
+        elif a > b:
+            return 1
+        else:
+            return 0
+    elif x.startswith('SplitFile.CheckBlock.') and y.startswith('SplitFile.CheckBlock.'):
+        a = int(x[x.rfind('.') + 1:], 16)
+        b = int(y[y.rfind('.') + 1:], 16)
+        if a < b:
+            return -1
+        elif a > b:
+            return 1
+        else:
+            return 0
+    else:
+        if x < y:
+            return -1
+        elif x > y:
+            return 1
+        else:
+            return 0
+        

Added: trunk/apps/yafi/mimedata.py
===================================================================
--- trunk/apps/yafi/mimedata.py 2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/mimedata.py 2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,193 @@
+"""mimedata.py
+
+   taken mostly from freenet.client.metadata.MimeTypeUtils
+"""
+
+mimedata = {}
+
+mimedata["rar"]     = "application/x-rar-compressed"
+mimedata["csm"]     = "application/cu-seeme"
+mimedata["cu"]      = "application/cu-seeme"
+mimedata["tsp"]     = "application/dsptype"
+mimedata["xls"]     = "application/excel"
+mimedata["spl"]     = "application/futuresplash"
+mimedata["hqx"]     = "application/mac-binhex40"
+mimedata["doc"]     = "application/msword"
+mimedata["dot"]     = "application/msword"
+mimedata["bin"]     = "application/octet-stream"
+mimedata["oda"]     = "application/oda"
+mimedata["pdf"]     = "application/pdf"
+mimedata["asc"]     = "application/pgp-keys"
+mimedata["pgp"]     = "application/pgp-signature"
+mimedata["ps"]      = "application/postscript"
+mimedata["ai"]      = "application/postscript"
+mimedata["eps"]     = "application/postscript"
+mimedata["ppt"]     = "application/powerpoint"
+mimedata["rtf"]     = "application/rtf"
+mimedata["wp5"]     = "application/wordperfect5.1"
+mimedata["zip"]     = "application/zip"
+mimedata["wk"]      = "application/x-123"
+mimedata["bcpio"]   = "application/x-bcpio"
+mimedata["pgn"]     = "application/x-chess-pgn"
+mimedata["cpio"]    = "application/x-cpio"
+mimedata["deb"]     = "application/x-debian-package"
+mimedata["dcr"]     = "application/x-director"
+mimedata["dir"]     = "application/x-director"
+mimedata["dxr"]     = "application/x-director"
+mimedata["dvi"]     = "application/x-dvi"
+mimedata["pfa"]     = "application/x-font"
+mimedata["pfb"]     = "application/x-font"
+mimedata["gsf"]     = "application/x-font"
+mimedata["pcf"]     = "application/x-font"
+mimedata["pcf.Z"]   = "application/x-font"
+mimedata["gtar"]    = "application/x-gtar"
+mimedata["tgz"]     = "application/x-gtar"
+mimedata["hdf"]     = "application/x-hdf"
+mimedata["phtml"]   = "application/x-httpd-php"
+mimedata["pht"]     = "application/x-httpd-php"
+mimedata["php"]     = "application/x-httpd-php"
+mimedata["php3"]    = "application/x-httpd-php3"
+mimedata["phps"]    = "application/x-httpd-php3-source"
+mimedata["php3p"]   = "application/x-httpd-php3-preprocessed"
+mimedata["class"]   = "application/x-java"
+mimedata["latex"]   = "application/x-latex"
+mimedata["frm"]     = "application/x-maker"
+mimedata["maker"]   = "application/x-maker"
+mimedata["frame"]   = "application/x-maker"
+mimedata["fm"]      = "application/x-maker"
+mimedata["fb"]      = "application/x-maker"
+mimedata["book"]    = "application/x-maker"
+mimedata["fbdoc"]   = "application/x-maker"
+mimedata["mif"]     = "application/x-mif"
+mimedata["com"]     = "application/x-msdos-program"
+mimedata["exe"]     = "application/x-msdos-program"
+mimedata["bat"]     = "application/x-msdos-program"
+mimedata["dll"]     = "application/x-msdos-program"
+mimedata["nc"]      = "application/x-netcdf"
+mimedata["cdf"]     = "application/x-netcdf"
+mimedata["pac"]     = "application/x-ns-proxy-autoconfig"
+mimedata["o"]       = "application/x-object"
+mimedata["pl"]      = "application/x-perl"
+mimedata["pm"]      = "application/x-perl"
+mimedata["shar"]    = "application/x-shar"
+mimedata["swf"]     = "application/x-shockwave-flash"
+mimedata["swfl"]    = "application/x-shockwave-flash"
+mimedata["sit"]     = "application/x-stuffit"
+mimedata["sv4cpio"] = "application/x-sv4cpio"
+mimedata["sv4crc"]  = "application/x-sv4crc"
+mimedata["tar"]     = "application/x-tar"
+mimedata["gf"]      = "application/x-tex-gf"
+mimedata["pk"]      = "application/x-tex-pk"
+mimedata["PK"]      = "application/x-tex-pk"
+mimedata["texinfo"] = "application/x-texinfo"
+mimedata["texi"]    = "application/x-texinfo"
+mimedata["~"]       = "application/x-trash"
+mimedata["%"]       = "application/x-trash"
+mimedata["bak"]     = "application/x-trash"
+mimedata["old"]     = "application/x-trash"
+mimedata["sik"]     = "application/x-trash"
+mimedata["t"]       = "application/x-troff"
+mimedata["tr"]      = "application/x-troff"
+mimedata["roff"]    = "application/x-troff"
+mimedata["man"]     = "application/x-troff-man"
+mimedata["me"]      = "application/x-troff-me"
+mimedata["ms"]      = "application/x-troff-ms"
+mimedata["ustar"]   = "application/x-ustar"
+mimedata["src"]     = "application/x-wais-source"
+mimedata["wz"]      = "application/x-wingz"
+mimedata["au"]      = "audio/basic"
+mimedata["snd"]     = "audio/basic"
+mimedata["mid"]     = "audio/midi"
+mimedata["midi"]    = "audio/midi"
+mimedata["mpga"]    = "audio/mpeg"
+mimedata["mpega"]   = "audio/mpeg"
+mimedata["mp2"]     = "audio/mpeg"
+mimedata["mp3"]     = "audio/mpeg"
+mimedata["m3u"]     = "audio/mpegurl"
+mimedata["ogg"]     = "audio/ogg"
+mimedata["sid"]     = "audio/psid"
+mimedata["aif"]     = "audio/x-aiff"
+mimedata["aiff"]    = "audio/x-aiff"
+mimedata["aifc"]    = "audio/x-aiff"
+mimedata["gsm"]     = "audio/x-gsm"
+mimedata["ra"]      = "audio/x-pn-realaudio"
+mimedata["rm"]      = "audio/x-pn-realaudio"
+mimedata["ram"]     = "audio/x-pn-realaudio"
+mimedata["rpm"]     = "audio/x-pn-realaudio-plugin"
+mimedata["wav"]     = "audio/x-wav"
+mimedata["iso"]     = "binary/cdimage"
+mimedata["gz"]      = "binary/gzip-compressed"
+mimedata["gif"]     = "image/gif"
+mimedata["ief"]     = "image/ief"
+mimedata["jpeg"]    = "image/jpeg"
+mimedata["jpg"]     = "image/jpeg"
+mimedata["jpe"]     = "image/jpeg"
+mimedata["png"]     = "image/png"
+mimedata["tiff"]    = "image/tiff"
+mimedata["tif"]     = "image/tiff"
+mimedata["ras"]     = "image/x-cmu-raster"
+mimedata["bmp"]     = "image/x-ms-bmp"
+mimedata["pnm"]     = "image/x-portable-anymap"
+mimedata["pbm"]     = "image/x-portable-bitmap"
+mimedata["pgm"]     = "image/x-portable-graymap"
+mimedata["ppm"]     = "image/x-portable-pixmap"
+mimedata["rgb"]     = "image/x-rgb"
+mimedata["xbm"]     = "image/x-xbitmap"
+mimedata["xpm"]     = "image/x-xpixmap"
+mimedata["xwd"]     = "image/x-xwindowdump"
+mimedata["csv"]     = "text/comma-separated-values"
+mimedata["html"]    = "text/html"
+mimedata["htm"]     = "text/html"
+mimedata["mml"]     = "text/mathml"
+mimedata["nfo"]     = "text/plain"
+mimedata["txt"]     = "text/plain"
+mimedata["rtx"]     = "text/richtext"
+mimedata["tsv"]     = "text/tab-separated-values"
+mimedata["h++"]     = "text/x-c++hdr"
+mimedata["hpp"]     = "text/x-c++hdr"
+mimedata["hxx"]     = "text/x-c++hdr"
+mimedata["hh"]      = "text/x-c++hdr"
+mimedata["c++"]     = "text/x-c++src"
+mimedata["cpp"]     = "text/x-c++src"
+mimedata["cxx"]     = "text/x-c++src"
+mimedata["cc"]      = "text/x-c++src"
+mimedata["h"]       = "text/x-chdr"
+mimedata["csh"]     = "text/x-csh"
+mimedata["c"]       = "text/x-csrc"
+mimedata["java"]    = "text/x-java"
+mimedata["moc"]     = "text/x-moc"
+mimedata["p"]       = "text/x-pascal"
+mimedata["pas"]     = "text/x-pascal"
+mimedata["etx"]     = "text/x-setext"
+mimedata["sh"]      = "text/x-sh"
+mimedata["tcl"]     = "text/x-tcl"
+mimedata["tk"]      = "text/x-tcl"
+mimedata["tex"]     = "text/x-tex"
+mimedata["ltx"]     = "text/x-tex"
+mimedata["sty"]     = "text/x-tex"
+mimedata["cls"]     = "text/x-tex"
+mimedata["vcs"]     = "text/x-vCalendar"
+mimedata["vcf"]     = "text/x-vCard"
+mimedata["asf"]     = "video/asf"
+mimedata["avi"]     = "video/avi"
+mimedata["dl"]      = "video/dl"
+mimedata["fli"]     = "video/fli"
+mimedata["gl"]      = "video/gl"
+mimedata["mpeg"]    = "video/mpeg"
+mimedata["mpg"]     = "video/mpeg"
+mimedata["mpe"]     = "video/mpeg"
+mimedata["qt"]      = "video/quicktime"
+mimedata["mov"]     = "video/quicktime"
+mimedata["asx"]     = "video/x-ms-asf"
+mimedata["movie"]   = "video/x-sgi-movie"
+mimedata["vrm"]     = "x-world/x-vrml"
+mimedata["vrml"]    = "x-world/x-vrml"
+mimedata["wrl"]     = "x-world/x-vrml"
+
+def get_type(filename):
+    ext = filename[filename.rfind('.') + 1:]
+    try:
+        return mimedata[ext]
+    except KeyError:
+        return None
+
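A standalone sketch (Python 3) of what `get_type` above does: a plain extension-to-MIME lookup against the `mimedata` dict. The table here is a small excerpt for illustration, not the full mapping.

```python
# Standalone sketch (Python 3) of mimedata.get_type: extension -> MIME type
# via a dict lookup.  This table is a small excerpt, not the full mapping.
mimedata = {
    "mp3": "audio/mpeg",
    "png": "image/png",
    "txt": "text/plain",
}

def get_type(filename):
    # Everything after the last '.' is the extension; with no '.', rfind
    # returns -1 and the whole name is looked up (and normally misses),
    # matching the original behaviour.
    ext = filename[filename.rfind('.') + 1:]
    return mimedata.get(ext)  # None for unknown extensions

print(get_type("song.mp3"))  # audio/mpeg
print(get_type("README"))    # None
```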

Added: trunk/apps/yafi/onionfec_a_1_2.py
===================================================================
--- trunk/apps/yafi/onionfec_a_1_2.py   2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/onionfec_a_1_2.py   2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,127 @@
+import onionfec8 as onionfec
+import thread
+import array
+import cStringIO
+
+#from fec import FECError
+import fec
+
+class FEC:
+    MAXBLOCKS = 128     # Maximum blocks in a single segment
+    NAME = 'OnionFEC_a_1_2'
+
+    def __init__(self):
+        pass
+
+    def encode(self, data, redundancy=0.5):
+        """FEC encode data.
+
+           Arguments are:
+             data: The data to encode - should be a string.
+             redundancy: FEC redundancy factor (default 0.5); the block
+                         size is chosen from the length of the data.
+
+           Returns a 2-tuple: first a list of blocks, then a list of check blocks.
+        """
+        try:
+            length = len(data)
+            blocksize = self.getblocksize(len(data))
+            k = self._getk(len(data), blocksize)
+            n = self._getn(k, redundancy)
+            if k > self.MAXBLOCKS:   #multisegment time
+                segsize = blocksize * self.MAXBLOCKS
+                blocks = []
+                checkblocks = []
+                for segstart in range(0, length, segsize):
+                    segdata = data[segstart:segstart + segsize]
+                    encseg = self.encode(segdata, redundancy=redundancy)
+                    blocks += encseg[0]
+                    checkblocks += encseg[1]
+                return (blocks, checkblocks)
+
+            c = onionfec.code(k, n)
+
+            datain = []
+            for i in range(k):
+                datain.append(data[i * blocksize:(i + 1) * blocksize])
+            x = blocksize - len(datain[-1])
+            if x > 0:
+                datain[-1] = datain[-1] + (x * '\0')
+            a = array.array('c', blocksize * '\0')
+
+            encoded = []
+            for i in range(n):
+                c.encode(datain, a, i, blocksize)
+                encoded.append(a.tostring())
+            return (encoded[:k], encoded[k:])
+        except onionfec.FECError, e:
+            raise fec.FECError, e[0]
+
+    def decode(self, data, size, blocks, checkblocks):
+        """FEC decode a single segment.
+
+           Arguments are:
+            data: List of splitfile blocks, as pairs; the first element
+                  should be the index of the block, starting at 0, and
+                  the second should be the actual data as a string.
+            size: The total size of the decoded file.
+            blocks, checkblocks: Number of (duh) blocks and checkblocks.
+
+           Returns the decoded data as a string.
+        """
+        if blocks > self.MAXBLOCKS:
+            raise fec.FECError, 'Too many blocks'
+        if len(data) < blocks:
+            raise fec.FECError, 'Not enough blocks'
+
+        try:
+            k = blocks
+            n = blocks + checkblocks
+            blocksize = len(data[0][1])
+            c = onionfec.code(k, n)
+
+            ixs = map(lambda (a, b): a, data)
+            for i in range(n):
+                if not ixs.count(i):
+                    data.append((i, blocksize * '\0'))
+            
+            ddata = []
+            ixs = []
+            for (ix, block) in data:
+                ddata.append(array.array('c', block))
+                ixs.append(ix)
+            blocksize = len(ddata[0])
+            io = cStringIO.StringIO()
+
+            c.decode(ddata, ixs, blocksize)
+
+            for i in range(k - 1):
+                io.write(ddata[ixs[i]].tostring())
+            part = size - ((k - 1) * blocksize)
+            io.write(ddata[ixs[k - 1]][:part].tostring())
+            return io.getvalue()
+        except onionfec.FECError, e:
+            raise fec.FECError, e[0]
+
+    def getblocksize(self, size):
+        if size < 1048576:      #1024 * 1024
+            return 131072       #128 * 1024
+        elif size < 33554432:   #32 * 1024 * 1024
+            return 262144       #256 * 1024
+        elif size < 67108864:   #64 * 1024 * 1024
+            return 524288       #512 * 1024
+        else:                   #128 * 1024 * 1024
+            return 1048576      #1024 * 1024
+
+    def _getk(self, size, blocksize):
+        return (size - 1) / blocksize + 1
+        
+    def _getn(self, k, redundancy):
+        if redundancy > 0.5:
+            raise fec.FECError, 'Too much redundancy'
+        n = k + int(k * redundancy)
+        if n == k:
+            n += 1
+        return n
+
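The segment arithmetic in `getblocksize`, `_getk`, and `_getn` above can be sketched standalone (Python 3, with `//` standing in for Python 2's integer `/`):

```python
# Standalone sketch (Python 3) of the segment arithmetic used by
# OnionFEC_a_1_2: block size from file size, k data blocks, n total blocks.
MAXBLOCKS = 128  # maximum data blocks per segment

def getblocksize(size):
    # Block size steps up with file size, as in FEC.getblocksize above.
    if size < 1024 * 1024:
        return 128 * 1024
    elif size < 32 * 1024 * 1024:
        return 256 * 1024
    elif size < 64 * 1024 * 1024:
        return 512 * 1024
    else:
        return 1024 * 1024

def getk(size, blocksize):
    # Ceiling division: Py2's (size - 1) / blocksize + 1.
    return (size - 1) // blocksize + 1

def getn(k, redundancy=0.5):
    # Data blocks plus check blocks; always at least one check block.
    n = k + int(k * redundancy)
    if n == k:
        n += 1
    return n

size = 1024 * 1024            # a 1 MiB file
bs = getblocksize(size)       # 262144 (256 KiB blocks)
k = getk(size, bs)            # 4 data blocks
n = getn(k)                   # 6 total -> 2 check blocks
print(bs, k, n)               # 262144 4 6
```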

Added: trunk/apps/yafi/yafi
===================================================================
--- trunk/apps/yafi/yafi        2006-01-18 20:12:22 UTC (rev 7877)
+++ trunk/apps/yafi/yafi        2006-01-18 20:14:19 UTC (rev 7878)
@@ -0,0 +1,648 @@
+#!/usr/bin/env python
+
+"""yafi
+
+    This is the actual CLI program that the user interacts with.
+"""
+import threading
+import os
+import sys
+import urllib
+import signal
+import pprint
+import copy
+from optparse import OptionParser
+
+import freenet
+import fcp
+import fec
+
+versionstring = "Yet Another Freenet Interface"
+usagestring = "%prog <command> [options] [URI] ..."
+
+commands = ['put', 'get', 'info', 'chk', 'svk']
+
+globaldata = None
+
+def main(argv=[__name__]):
+    if len(argv) < 2:
+        sys.exit(1)
+    command = argv[1]
+
+    parser = init_parser()
+    (options, args) = parser.parse_args(argv[2:])
+
+    try:
+        for c in commands:
+            if c == command:
+                commandstring = "%s_command(options, args)" % command
+                return eval(commandstring)
+        else:
+            parser.print_help()
+            return 1
+    except KeyboardInterrupt:
+        sys.stdout.write('Manually Cancelled!\n')
+        sys.stdout.flush()
+        if globaldata is not None:
+            globaldata[1].kill()
+            globaldata[0].close()
+
+def init_parser():
+    parser = OptionParser(usage=usagestring, version=versionstring)
+    parser.add_option('-v', '--verbose', action='store_true', dest='verbose',
+                      default=False, help='produce lots of output')
+    parser.add_option('-i', '--infile', dest='infile', metavar='FILE',
+                      help='file to read keys from (or - to read from' \
+                      ' standard input)')
+    parser.add_option('-f', '--file', dest='filename', metavar='FILE',
+                      help='store retrieved data to FILE (- for stdout)')
+    parser.add_option('-d', '--dir', dest='dir',
+                      help='save all files to DIR')
+    parser.add_option('-n', '--host', dest='host', default='localhost',
+                      help='freenet node to use')
+    parser.add_option('-p', '--port', dest='port', type='int', default=8481,
+                      help='port of freenet host')
+    parser.add_option('-o', '--overwrite', action='store_true',
+                      dest='overwrite', default=False, help='force ' \
+                      'overwriting of existing files')
+    parser.add_option('-t', '--threads', dest='threads', type='int',
+                      default=None, metavar='NUM', help='number of splitfile ' \
+                      'threads to use')
+    parser.add_option('--healingthreads', dest='hthreads', type='int',
+                      default=10, metavar='NUM', help='number of threads to ' \
+                      'use for splitfile healing')
+    parser.add_option('-a', '--aggressive', dest='aggressive',
+                      action='store_true', default=False, help='keep ' \
+                      'downloading splitfile blocks even ' \
+                      'after we have failed too many')
+    parser.add_option('--raw', dest='raw', action='store_true', default=False,
+                      help='get or put only metadata, no redirects')
+    parser.add_option('-l', '--htl', dest='htl', type='int',
+                      default=freenet.MAXHTL, metavar='NUM', help='Hops to ' \
+                      'live')
+    parser.add_option('-s', '--report', dest='report', action='store_true',
+                      default=False, help='print a report of what has been ' \
+                      'downloaded when done')
+    parser.add_option('-r', '--retries', dest='retries', type='int', default=9,
+                      metavar='NUM', help='number of times to retry each ' \
+                      'splitfile block')
+    parser.add_option('--healing', dest='healpercent', type='int', default=100,
+                      metavar='PERCENT', help='Percent of healing blocks to ' \
+                      'upload')
+    return parser
+
+def get_command(options, keys):
+    global globaldata
+
+    #default number of threads
+    if options.threads is None:
+        options.threads = 20
+
+    #sanitize the keys
+    for i in range(len(keys)):
+        keys[i] = keys[i].strip('"')
+        keys[i] = urllib.unquote(keys[i])
+
+    if len(keys) > 1:
+        if options.filename is not None:
+            options.filename = None
+            print 'Warning: multiple URIs specified; filename option ignored'
+        
+    if options.infile is not None:
+        if options.infile == '-':
+            print "Reading keys from standard input:"
+            infile = sys.stdin
+        else:
+            print "Reading keys from %s:" % options.infile
+            infile = open(options.infile)
+
+        for line in infile.readlines():
+            if line[:3] == 'CHK' or line[:3] == 'KSK' or line[:3] == 'SSK':
+                keys.append(line.strip())
+
+    for key in keys:
+        if (options.filename is None) and \
+           (key[:3] != 'KSK') and \
+           (key.rfind('/') == -1 or 
+            key[key.rfind('/') + 1:] == ''):
+            print 'Error: unable to get filename for', key
+            return 2
+
+    if options.dir is not None:
+        options.dir = os.path.dirname(options.dir)
+        try:
+            os.stat(options.dir)
+        except OSError:
+            os.mkdir(options.dir)
+
+    if options.verbose is True:
+        e = threading.Event()
+    else:
+        e = None
+    n = freenet.Node(options.host, options.port, statevent=e)
+    statthread = StatThread(e, n, direction='In')
+    statthread.start()
+
+    globaldata = (n, statthread)
+
+    successkeys = []
+    failedkeys = []
+    skippedkeys = []
+
+    try:
+        for key in keys:
+            if options.filename != None:
+                if options.filename == '-':
+                    filename = sys.stdout
+                else:
+                    filename = options.filename
+            elif key[:3] == 'KSK':
+                filename = key[4:]
+            else:
+                filename = key[key.rfind('/') + 1:]
+            if options.dir is not None:
+                filename = options.dir + '/' + filename
+
+            if filename is not sys.stdout:
+                print 'Saving', key, 'to', filename, '...'
+                if not options.overwrite:
+                    try:
+                        os.stat(filename)
+                    except OSError:
+                        pass
+                    else:
+                        print "%s already exists!  Skipping download..." % \
+                              filename
+                        skippedkeys.append(key)
+                        continue
+            elif options.verbose is True:
+                print "Writing", key, "to stdout..."
+
+            try:
+                if not options.raw:
+                    data = n.get(key, htl=options.htl, retries=options.retries,
+                                 threads=options.threads,
+                                 aggressive=options.aggressive)
+                else:
+                    data = n.rawget(key, htl=options.htl)
+            except freenet.DNFError:
+                print 'Data not found!'
+                failedkeys.append(key)
+            except freenet.RNFError:
+                print 'Route not found!'
+                failedkeys.append(key)
+            except freenet.FECError, err:
+                print "FEC Error:", err[0]
+                failedkeys.append(key)
+            except freenet.SegmentFailedError, err:
+                print "Splitfile segment %d failed with %d of %d blocks" % \
+                      (err[0] + 1, err[1], err[2])
+                failedkeys.append(key)
+            except (freenet.FormatError, freenet.URIError), err:
+                print "Malformed key:", key
+                failedkeys.append(key)
+            except freenet.KeyNotFoundError, err:
+                print "The key %s was not found in the manifest!" % err[0]
+                failedkeys.append(key)
+            except freenet.TimeoutError:
+                print 'Timed out!'
+                failedkeys.append(key)
+            except freenet.ChecksumFailedError:
+                print 'Checksum failed!'
+                print 'This might be an error in YAFI, or it might be an ' \
+                      'error in Freenet.  Try re-downloading the file.'
+                failedkeys.append(key)
+            else:
+                successkeys.append(key)
+                if filename is sys.stdout:
+                    print data 
+                    if options.verbose:
+                        print 'Success!'
+                else:
+                    file = open(filename, 'w')
+                    file.write(data)
+                    file.close()
+                    print 'Success!'
+
+                # Now do healing.
+                if len(data) > freenet.MAXSIZE and not options.raw and \
+                   options.hthreads > 0:
+                    print 'Doing splitfile healing...'
+                    try:
+                        if options.healpercent > 100 or \
+                           options.healpercent < 1:
+                            raise freenet.HealingError
+                        n.heal(data, key, htl=options.htl,
+                               threads=options.hthreads,
+                               percent=(options.healpercent / 100.0),
+                               skipblocks=n.successblocks())
+                    except freenet.HealingError:
+                        print 'Healing failed!'
+                    else:
+                        print 'Healing was successful!'
+
+    finally:
+        statthread.kill()
+
+    if options.report:
+        if len(successkeys) > 0:
+            print "Successfully retrieved keys:"
+            for key in successkeys:
+                print key
+        if len(failedkeys) > 0:
+            print "Failed keys:"
+            for key in failedkeys:
+                print key
+        if len(skippedkeys) > 0:
+            print "Skipped keys:"
+            for key in skippedkeys:
+                print key
+
+    return 0
+
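The per-key filename rule in the loop above (KSK keys keep everything after `KSK@`; other keys keep the part after the last `/`) can be isolated as a small sketch; `filename_for` is a hypothetical name for illustration, not a function in yafi itself:

```python
# Sketch (Python 3) of get_command's per-key filename rule: KSK keys keep
# everything after 'KSK@'; other keys keep the part after the last '/'.
# 'filename_for' is a hypothetical name, not a function in yafi itself.
def filename_for(key):
    if key[:3] == 'KSK':
        return key[4:]               # strip the 'KSK@' prefix
    return key[key.rfind('/') + 1:]  # basename part of the URI

print(filename_for("KSK@hello.txt"))        # hello.txt
print(filename_for("CHK@abc123/song.mp3"))  # song.mp3
```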
+def put_command(options, args):
+    global globaldata
+
+    #default number of threads
+    if options.threads is None:
+        options.threads = 15
+
+    if options.verbose is True:
+        e = threading.Event()
+    else:
+        e = None
+    n = freenet.Node(options.host, options.port, statevent=e)
+    statthread = StatThread(e, n, direction='Out')
+    statthread.start()
+
+    globaldata = (n, statthread)
+    successfiles = []
+    failedfiles = []
+    collisionfiles = []
+
+    try:
+        for filename in args:
+            try:
+                f = open(filename)
+            except (IOError, OSError):
+                print "Failure reading file %s" % filename
+                failedfiles.append(filename)
+                continue
+            data = f.read()
+            f.close()
+
+            print "Inserting %s..." % filename
+
+            try:
+                key = n.put(data, htl=options.htl, threads=options.threads,
+                            removelocal=True, filename=filename)
+            except freenet.RNFError:
+                print 'Route not found!'
+                failedfiles.append(filename)
+            except freenet.TimeoutError:
+                print 'Timed out!'
+                failedfiles.append(filename)
+            except (freenet.FormatError, freenet.URIError), err:
+                print "Unknown error:", filename
+                failedfiles.append(filename)
+            except freenet.KeyCollisionError, err:
+                print "Key collision: %s as %s" % (filename, err[0])
+                collisionfiles.append(err[0] + '/' +
+                                      filename[filename.rfind('/') + 1:])
+            else:
+                if key['URI'].startswith('CHK'):
+                    basename = filename[filename.rfind('/') + 1:]
+                    print 'Success!\nKey is: %s/%s' % (key['URI'], basename)
+                    successfiles.append(key['URI'] + '/' + basename)
+                else:
+                    print 'Success!\nKey is: %s' % key['URI']
+                    successfiles.append(key['URI'])
+    finally:
+        statthread.kill()
+
+    if options.report:
+        if len(successfiles) > 0:
+            print "Successfully inserted files:"
+            for file in successfiles:
+                print file
+        if len(failedfiles) > 0:
+            print "Failed files:"
+            for file in failedfiles:
+                print file
+        if len(collisionfiles) > 0:
+            print "Collided files:"
+            for file in collisionfiles:
+                print file
+
+    return 0
+
+def info_command(options, args):
+    f = fcp.FCP(options.host, options.port)
+    info = f.info()
+    hello = f.hello()
+    for k in hello.keys():
+        info[k] = hello[k]
+    keys = info.keys()
+    keys.sort()
+    for k in keys:
+        print k, ':', info[k]
+
+    return 0
+
+def chk_command(options, filenames):
+    f = fcp.FCP(options.host, options.port)
+    n = freenet.Node(options.host, options.port)
+
+    for filename in filenames:
+        try:
+            o = open(filename)
+        except (IOError, OSError):
+            print "Failure reading file %s" % filename
+            continue
+        data = o.read()
+        o.close()
+
+        if len(data) > freenet.MAXSIZE:
+            blocks, checkblocks = fec.encode(data)
+            metadata = n.createmetadata(data, filename)
+            chk = f.genCHK(data='', metadata=metadata)
+            print "%s/%s" % (chk, os.path.basename(filename))
+        else:
+            chk = f.genCHK(data)
+            print "%s/%s" % (chk, os.path.basename(filename))
+
+    return 0
+
+def svk_command(options, args):
+    f = fcp.FCP(options.host, options.port)
+    res = f.genSVK()
+    print 'Public Key: ', res['PublicKey']
+    print 'Private Key:', res['PrivateKey']
+    print 'Crypto Key: ', res['CryptoKey']
+
+    return 0
+
+# God, this whole thing is a horrid hack.
+# The GUI will be better, I promise.
+class StatThread(threading.Thread):
+    def __init__(self, event, node, direction):
+        threading.Thread.__init__(self)
+        self._e = event
+        self._n = node
+        self._done = False
+        self._dir = direction
+
+        self._initvars()
+
+        if event is None:   # we don't want output
+            self._done = True
+
+    def run(self):
+        while not self._done:
+            self._e.wait()
+            if self._done:
+                break
+            status = self._n.getstatus()
+            self._e.clear()
+            self._display(status)
+
+    def kill(self):
+        self._done = True
+        if self._e is not None:
+            self._e.set()
+
+    def _initvars(self):
+        self._splitfile = False
+        self._healing = False
+        self._restarts = 0
+        self._redirs = 0
+        self._doneblocks = 0
+        self._failedblocks = 0
+        self._transferring = False
+        self._coding = False
+        self._segment = 0
+        self._pending = 0
+        self._retries = 0
+        self._transfers = []
+
+    def _display(self, status):
+        def fmtstr(n):
+            factor = 1000.0
+            prefixes = ['', 'kilo', 'mega', 'giga', 'tera', 'peta', 'exa']
+            unit = 'byte'
+            c = factor
+            for prefix in prefixes:
+                if n < c:
+                    res = '%.1f' % (n / (c / factor))
+                    res += ' ' + prefix + unit
+                    if n > 1:
+                        res += 's'
+                    break
+                c *= factor
+            return res
+        def fmttransfers(transfers):
+            res = ''
+            lt = None
+            transfers.sort(trnscmp)
+            for t in transfers:
+                if t[1]['State'] != 'Stopped':
+                    lt = t
+            for t in transfers:
+                if t[1]['State'] != 'Stopped':
+                    if t[1]['State'] == 'Transferring':
+                        res += '*'
+                    res += str(t[0] + 1)
+                    if t is not lt:
+                        res += ', '
+            return res
+        def cmptransfers(t1, t2):
+            """Return True if same, False otherwise.
+            """
+            t1.sort(trnscmp)
+            t2.sort(trnscmp)
+            if len(t1) != len(t2):
+                return False
+            for i in range(len(t1)):
+                if t1[i][0] != t2[i][0] or t1[i][1]['State'] != t2[i][1]['State']:
+                    return False
+            return True
+        def allstopped(transfers):
+            for t in transfers:
+                if t[1]['State'] != 'Stopped':
+                    return False
+            return True
+        def trnscmp(a, b):
+            if a > b: return 1
+            elif a < b: return -1
+            else: return 0
+        if status['State'] == 'Stopped':
+            self._initvars()
+        elif self._dir == 'In':
+            if status['State'] == 'Active':
+                if self._splitfile:
+                    self._initvars()
+                for s in status['Redirects'][self._redirs:]:
+                    if s == 'DBR':
+                        print 'Following DBR Redirect...'
+                    elif s == 'Redirect':
+                        print 'Following Redirect...'
+                    self._newtransfer = True
+                    self._redirs += 1
+                if status['Transfers']['State'] == 'Transferring':
+                    if not self._transferring:
+                        print 'Transferring', status['Transfers']['DataSize'], 'bytes...'
+                        self._transferring = True
+                    else:
+                        print status['Transfers']['DataTransferred'], 'bytes done...'
+                else:
+                    self._transferring = False
+                if status['Transfers']['State'] == 'Active':
+                    if status['Transfers']['Restarts'] > self._restarts:
+                        print 'Connection restarted...'
+                    self._restarts = status['Transfers']['Restarts']
+            elif status['State'] == 'Splitfile':
+                if not self._splitfile:
+                    print "Splitfile: %s" % fmtstr(int(status['Size']))
+                    self._splitfile = True
+                elif status['Segment'][0] > self._segment:
+                    print "Segment %d of %d" % (status['Segment'][0], status['Segment'][1])
+                    print "Blocks: %d required / %d total" % (status['Blocks'][0], status['Blocks'][1])
+                    self._segment = status['Segment'][0]
+                    self._doneblocks = 0
+                    self._failedblocks = 0
+                    self._coding = False
+                elif status['Segment'][0] > 0:
+                    doneblocks = len(status['DoneBlocks'])
+                    failedblocks = len(status['FailedBlocks'])
+                    if doneblocks > self._doneblocks:  #new blocks are finished
+                        for n in range(doneblocks - self._doneblocks):
+                            print color("Got block %d" % (status['DoneBlocks'][-(n + 1)] + 1), "green")
+                        print "Got %d of %d blocks." % (len(status['DoneBlocks']), status['Blocks'][0])
+                        self._doneblocks = doneblocks
+                    elif failedblocks > self._failedblocks:
+                        for n in range(failedblocks - self._failedblocks):
+                            blockid = status['FailedBlocks'][-(n + 1)]
+                            count = status['FailedBlocks'].count(blockid)
+                            if count > 1:
+                                print color("DNF: block %d (%d times)" % (blockid + 1, count), "red")
+                            else:
+                                print color("DNF: block %d (1 time)" % (blockid + 1), "red")
+                        self._failedblocks = failedblocks
+                    if not cmptransfers(self._transfers, status['Transfers']) and not allstopped(status['Transfers']):
+                        print "Transferring blocks", fmttransfers(status['Transfers'])  #map(lambda a: a + 1, status['ActiveBlocks'])
+                    self._transfers = copy.copy(status['Transfers'])
+            elif status['State'] == 'Healing':
+                if not self._healing:
+                    print "Healing: %d blocks" % status['Blocks']
+                    self._initvars()
+                    self._healing = True
+                else:
+                    doneblocks = len(status['DoneBlocks'])
+                    failedblocks = len(status['FailedBlocks'])
+                    if doneblocks > self._doneblocks:  #new blocks are finished
+                        for n in range(doneblocks - self._doneblocks):
+                            print color("Inserted block %d." % (status['DoneBlocks'][-(n + 1)] + 1), "green")
+                        print "Inserted %d of %d blocks." % (len(status['DoneBlocks']), status['Blocks'])
+                        self._doneblocks = doneblocks
+                    if failedblocks > self._failedblocks:
+                        for n in range(failedblocks - self._failedblocks):
+                            blockid = status['FailedBlocks'][-(n + 1)]
+                            count = status['FailedBlocks'].count(blockid)
+                            if count > 1:
+                                print color("Block %d failed %d times" % (blockid + 1, count), "red")
+                            else:
+                                print color("Block %d failed 1 time" % (blockid + 1), "red")
+                        self._failedblocks = failedblocks
+                    if not cmptransfers(self._transfers, status['Transfers']) and not allstopped(status['Transfers']):
+                        print "Transferring blocks", fmttransfers(status['Transfers'])  #map(lambda a: a + 1, status['ActiveBlocks'])
+                    self._transfers = copy.copy(status['Transfers'])
+            elif status['State'] == 'Decoding':
+                if not self._coding:
+                    print 'Decoding splitfile... Please wait...'
+                    self._coding = True
+            else:
+                print "DEBUG DEBUG DEBUG DEBUG"
+        elif self._dir == 'Out':
+            if status['State'] == 'Active':
+                if self._splitfile:
+                    self._initvars()
+                    print "Inserting manifest..."
+                if status['Retries'] > self._retries:
+                    print 'Insert failed, retrying...'
+                    self._retries = status['Retries']
+                if status['Transfers']['State'] == 'Transferring':
+                    if not self._transferring:
+                        print 'Data is transferring...'
+                        self._transferring = True
+                    elif status['Transfers']['Pending'] > self._pending:
+                        print 'Transfer is pending...'
+                        self._pending = status['Transfers']['Pending']
+                else:
+                    self._transferring = False
+                if status['Transfers']['State'] == 'Active':
+                    if status['Transfers']['Restarts'] > self._restarts:
+                        print 'Connection restarted...'
+                        self._restarts = status['Transfers']['Restarts']
+            elif status['State'] == 'Splitfile':
+                if not self._splitfile:
+                    print "Splitfile: %s, %d blocks" % (fmtstr(int(status['Size'])), status['Blocks'])
+                    print "Key is: %s" % status['Key']
+                    self._splitfile = True
+                else:
+                    doneblocks = len(status['DoneBlocks'])
+                    failedblocks = len(status['FailedBlocks'])
+                    if doneblocks > self._doneblocks:  #new blocks are finished
+                        for n in range(doneblocks - self._doneblocks):
+                            print color("Inserted block %d." % (status['DoneBlocks'][-(n + 1)] + 1), "green")
+                        print "Finished %d of %d blocks." % (len(status['DoneBlocks']), status['Blocks'])
+                        self._doneblocks = doneblocks
+                    if failedblocks > self._failedblocks:
+                        for n in range(failedblocks - self._failedblocks):
+                            blockid = status['FailedBlocks'][-(n + 1)]
+                            count = status['FailedBlocks'].count(blockid)
+                            if count > 1:
+                                print color("Block %d failed %d times." % (blockid + 1, count), "red")
+                            else:
+                                print color("Block %d failed 1 time." % (blockid + 1), "red")
+                        self._failedblocks = failedblocks
+                    if not cmptransfers(self._transfers, status['Transfers']) and not allstopped(status['Transfers']):
+                        print "Transferring blocks", fmttransfers(status['Transfers'])  #map(lambda a: a + 1, status['ActiveBlocks'])
+                    self._transfers = copy.copy(status['Transfers'])
+            elif status['State'] == 'Encoding':
+                if not self._coding:
+                    print 'Encoding Splitfile... Please wait...'
+                    self._coding = True
+            else:
+                print "DEBUG DEBUG DEBUG DEBUG"
+
+# Following color stuff adapted from Andrei Kulakov
+# http://silmarill.org/files/avkutil.py
+colors = {
+    "black"     :   "30",
+    "red"       :   "31",
+    "green"     :   "32",
+    "brown"     :   "33",
+    "blue"      :   "34",
+    "purple"    :   "35",
+    "cyan"      :   "36",
+    "lgray"     :   "37",
+    "gray"      :   "1;30",
+    "lred"      :   "1;31",
+    "lgreen"    :   "1;32",
+    "yellow"    :   "1;33",
+    "lblue"     :   "1;34",
+    "pink"      :   "1;35",
+    "lcyan"     :   "1;36",
+    "white"     :   "1;37"
+    }
+
+def color(text, color):
+    opencol = "\033["
+    closecol = "m"
+    clear = opencol + "0" + closecol
+    fg = opencol + colors[color] + closecol
+    return "%s%s%s" % (fg, text, clear)
+
+if __name__ == '__main__':
+    sys.exit(main(sys.argv)) 
+
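The `color()` helper above wraps text in ANSI SGR escape sequences. A minimal Python 3 sketch, using an excerpt of the full `colors` table:

```python
# Minimal Python 3 sketch of yafi's color(): wrap text in ANSI SGR escape
# sequences.  The table is an excerpt of the full colors dict above.
colors = {"red": "31", "green": "32", "yellow": "1;33"}

def color(text, name):
    # "\033[<code>m" sets the foreground; "\033[0m" resets it.
    return "\033[%sm%s\033[0m" % (colors[name], text)

print(color("Got block 1", "green"))  # green text on ANSI terminals
```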


Property changes on: trunk/apps/yafi/yafi
___________________________________________________________________
Name: svn:executable
   + *

