[Fink-devel] Re: fink-mirrors breaks selfupdate (FAQ)

2003-12-28 Thread Chris Pepper
Subject: Re: fink-mirrors breaks selfupdate (FAQ)
Newsgroups: gmane.os.apple.fink.devel
Date: Sat, 27 Dec 2003 01:04:17 +0100
pH1nk wrote:

  Ok. I took the mirror folder out of the tempdir folder I created with
  the instructions on the web page, copied that to /sw/lib/fink/, and
  it's all working
 
  Woo-hoo
Well, good for you. There is still the question of why you and others keep
seeing this problem. There was a window of less than 3 hours two days
ago where this problem could come up for people who were selfupdating
then. It was fixed with the release of fink-0.17.3-1. If it came up much
later (allowing for the synchronization latency of the rsync and anoncvs
servers), there must be some mechanism at work that is yet unexplained.
Unfortunately, none of those claiming to have had this problem later,
even with fink-0.17.3-1, have given a precise account of what they were
doing to get into the problem and out of it.
As far as I am concerned, there is no clear evidence that there should
still be anything wrong, and I consider this affair closed.
	Alas, it's not completely fixed. I have the current fink 
package, but it still fails on fink-mirrors.

	I tried downloading the fink-mirrors tarball directly and 
dropping it into /sw/src, but that doesn't help either.

Chris Pepper
PS-Please CC me -- I'm not currently on this list.
[EMAIL PROTECTED]:/sw/src$ fink list -i
Information about 2190 packages read in 0 seconds.
 i   apt  0.5.4-35 Advanced front-end for dpkg
 i   apt-shlibs   0.5.4-35 Advanced front-end for dpkg
(i)  base-files   1.9.0-1  Directory infrastructure
 i   bzip21.0.2-12 Block-sorting file compressor
 i   bzip2-dev1.0.2-12 Developer files for bzip2 package
 i   bzip2-shlibs 1.0.2-12 Shared libraries for bzip2 package
 i   cctools-extra1:495-1  Extra software from cctools
 i   daemonic 20010902-2   Interface to daemon init scripts
 i   darwin   7.2.0-1  [virtual package representing the kernel]
 i   debianutils  1.23-11  Misc. utilities specific to Debian (and F...
 i   dillo0.6.6-3  Small simple web browser
 i   dlcompat-dev 20030629-15  Dynamic loading compatibility library dev...
 i   dlcompat-shlibs  20030629-15  shared libraries for dlcompat
 i   dpkg 1.10.9-27    The Debian package manager
 i   expat1.95.6-2 C library for parsing XML
 i   expat-shlibs 1.95.6-2 C library for parsing XML
 i   fink 0.17.3-1 The Fink package manager
 i   fink-prebinding  0.7-2Tools for enabling prebinding in Fink
 i   gawk 3.1.2-12 The Awk processing language, GNU edition
 i   gdbm31.8.3-1  GNU dbm library
 i   gdbm3-shlibs 1.8.3-1  Shared libraries for gdbm3 package
 i   gettext  0.10.40-17   Message localization support
 i   gettext-bin  0.10.40-17   Executables for gettext package
 i   gettext-dev  0.10.40-17   Developer files for gettext package
 i   glib 1.2.10-18    Common C routines used by Gtk+ and other ...
 i   glib-shlibs  1.2.10-18    Common C routines used by Gtk+ and other ...
 i   gmp  4.1.2-11 GNU multiple precision arithmetic library
 i   gmp-shlibs   4.1.2-11 Shared libraries for gmp package
(i)  gtk+         1.2.10-25    The Gimp Toolkit
(i)  gtk+-data    1.2.10-25    The Gimp Toolkit
(i)  gtk+-shlibs  1.2.10-25    The Gimp Toolkit
 i   gzip 1.2.4a-6 The gzip file compressor
 i   libiconv 1.9.1-11 Character set conversion library
 i   libiconv-bin 1.9.1-11 Executables for libiconv package
 i   libiconv-dev 1.9.1-11 Developer files for libiconv package
 i   libjpeg  6b-6 JPEG image format handling library
 i   libjpeg-bin  6b-6 Executables for libjpeg package
 i   libjpeg-shlibs   6b-6 Shared libraries for libjpeg package
 i   libpcap  0.6.2-6  Network packet capture library
 i   libpcap-shlibs   0.6.2-6  Network packet capture library
 i   libpng3  1.2.5-14 PNG image format handling library
 i   libpng3-shlibs   1.2.5-14 Shared libraries for libpng3 package
(i)  libxml2  2.6.2-1  XML parsing library, version 2
(i)  libxml2-bin  2.6.2-1  XML parsing library, version 2
(i)  libxml2-shlibs   2.6.2-1  XML parsing library, version 2
 i   lynx 2.8.4-22 Console based web browser
 i   macosx   10.3.2-1 [virtual package representing the system]
(i)  minicom  2.1-10   Serial communication program
 i   ncurses  5.3-2003101  Full-screen ascii drawing library
 i   ncurses-dev  5.3-2003101  Development files for ncurses package
 i   ncurses-shlibs   5.3-2003101  Shared libraries for ncurses package
 i   net-snmp 5.0.7-14 SNMP tools and libraries
 i   net-snmp-shlibs  5.0.7-14

[Fink-devel] openssl vs. Panther

2003-10-09 Thread Chris Pepper
	With 7B68, fink (updated from CVS) fails to build openssl:

vsigntca.pem = 18d46017.0
making all in test...
cc -I../include -fPIC -DTHREADS -D_REENTRANT -O3 -D_DARWIN 
-DB_ENDIAN -fno-common -I/sw/include  -c -o bntest.o bntest.c
LD_LIBRARY_PATH=..:$LD_LIBRARY_PATH \
cc -o bntest -I../include -fPIC -DTHREADS -D_REENTRANT -O3 -D_DARWIN 
-DB_ENDIAN -fno-common bntest.o  -L.. -lcrypto
ld: Undefined symbols:
_BN_mod
make[1]: *** [bntest] Error 1
make: *** [sub_all] Error 1
### execution of  failed, exit code 2
Failed: compiling openssl-0.9.6k-1 failed
[EMAIL PROTECTED]:~$ uname -a
Darwin salt.rockefeller.edu 7.0.0 Darwin Kernel Version 7.0.0: Thu 
Sep 11 17:21:11 PDT 2003; root:xnu/xnu-505.obj~2/RELEASE_PPC  Power 
Macintosh powerpc
[EMAIL PROTECTED]:~$ sw_vers
ProductName:Mac OS X
ProductVersion: 10.3
BuildVersion:   7B68
[EMAIL PROTECTED]:~$ fink --version
Package manager version: 0.13.8
Distribution version: 0.5.3.cvs


Chris Pepper
--
Chris Pepper:   http://www.reppep.com/~pepper/
Rockefeller University: http://www.rockefeller.edu/
---
This SF.net email is sponsored by: SF.net Giveback Program.
SourceForge.net hosts over 70,000 Open Source Projects.
See the people who have HELPED US provide better services:
Click here: http://sourceforge.net/supporters.php
___
Fink-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/fink-devel


Re: [Fink-devel] Fink vs. Panther (apt failure)

2003-10-02 Thread Chris Pepper
At 9:55 AM -0400 2003/09/30, Benjamin Reed wrote:
Chris Pepper wrote:

I'm running 7b68 on a dual G4, and bootstrap.sh fails to build 
apt; is a fix or workaround available?
I assume you mean the 0.5.3 bootstrap from the tarball?  That 
definitely won't work...

I checked the archives, but didn't find anything helpful.
Panther is not officially supported.
	I know.

If you want to be able to limp along until we have it officially 
supported, you can bootstrap fink from CVS.  See Updating the 
Package Manager at:

  http://fink.sourceforge.net/doc/cvsaccess/index.php

Not everything will work, and likely anything you build there will 
have incompatibilities with what gets released officially from Fink 
as far as panther support, so be prepared to start over when we have 
something that's working better.  =)
	That's fine. I normally salvage /sw/src and reinstall from 
source periodically anyway.

	This page doesn't talk about how to get started, just how to 
upgrade a working install. I will copy /sw from my Jaguar partition 
onto my Panther partition, then "fink selfupdate-cvs; fink 
update-all"; news at 11.

Chris
--
Chris Pepper:   http://www.reppep.com/~pepper/
Rockefeller University: http://www.rockefeller.edu/
---


Re: [Fink-devel] parallel downloads...

2002-04-23 Thread Chris Pepper

At 12:03 PM -0400 2002/04/23, Chris Devers wrote:
On Tue, 23 Apr 2002, Max Horn wrote:

  At 9:43 Uhr -0400 23.04.2002, Chris Zubrzycki wrote:
  
  How hard would it be to add code to perform x number of downloads at
  once, where x is set in the config field? just wondering, for people
  who have fast connections.

  First, you would have to do multiple processes (forks). Then you have
  to manage those somehow.

Basically, re-implement a kinda backwards Apache, except instead of
serving multiple parallel URLs, you're grabbing them.

Max's points about the complexity of implementing this are all valid. I'll
just add that, in addition to that complexity/overhead/debugging that this
would involve, it's also not clear that it would save much time.

Even given that the design issues are thought through & properly
implemented, I think the best case scenario (assuming that computational
time of running all this is effectively zero & we're bound instead by
bandwidth) is that it takes exactly the same amount of time to download
everything.

That's not true. As you say below, the best case is when your 
local pipe can handle all downloads simultaneously, at the maximum 
speed the far-end servers can provide them. You could get five quick 
files, all simultaneous with getting a single slow file. I know my 
download speeds rarely reach the speed of my DSL line, but I can 
often run four downloads simultaneously without visibly affecting 
per-download speed.

The ideal implementation would be to use the multi-download 
capabilities in curl or wget. You'd feed curl a bunch of URLs, and it 
would return when they were all fetched or had failed. Unfortunately, 
fink's needs are more complicated than they're likely to support, 
since we'd need to provide alternate mirrors for each target.
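The batch semantics described above (hand over a list of URLs, get back 
which succeeded and which failed once everything is done) can be 
sketched in Python. This is only an illustration of the idea, not fink 
code; `fetch` here is a hypothetical download callable supplied by the 
caller:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_batch(urls, fetch, max_workers=6):
    """Run fetch(url) for every URL concurrently; return
    (succeeded, failed) lists once every transfer has finished."""
    def attempt(url):
        try:
            fetch(url)
            return True
        except Exception:
            return False  # a failed transfer just lands in `failed`
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(attempt, urls))
    succeeded = [u for u, ok in zip(urls, results) if ok]
    failed = [u for u, ok in zip(urls, results) if not ok]
    return succeeded, failed
```

The caller gets exactly the "all fetched, or failed" report the text 
asks for, and can then fall back to per-file alternate mirrors for the 
failures.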

A simple alternative would be for fink to start 6 child 
processes, each of which runs a curl download. The parent would just 
sleep until the children all returned, then restart at the 
beginning of the download process and discover that all or most of 
the packages are already present. After the first attempt, fink could 
run in the current single-download try/fail/retry-alternate mode, 
getting human input on which alternatives to try. It's not very 
elegant, but this is deliberately a much simpler approach than 
building in all the intelligence we could take advantage of, in terms 
of alternatives and user interaction.
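The two-phase scheme above (a dumb parallel first pass, then the 
existing sequential retry loop for whatever is still missing) might 
look like this; again just a sketch, with `fetch` and `have` as 
hypothetical stand-ins for fink's download step and its 
file-already-present check:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def prefetch_pass(targets, fetch, have, workers=6):
    """First pass: try every target in parallel, ignoring failures.
    Return the targets still missing afterward, for the normal
    single-download try/fail/retry-alternate loop to handle
    interactively."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch, t) for t in targets]
        for f in as_completed(futures):
            try:
                f.result()
            except Exception:
                pass  # a failed download simply stays missing
    return [t for t in targets if not have(t)]
```

The parent "sleeps until the children return" inside `as_completed`, 
then re-scans for missing targets, which is the restart-and-discover 
step described above.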

Think about it: instead of four consecutive downloads that take (making
up figures here) ten seconds each, you have four simultaneous downloads
that take forty seconds each, because they're still sharing the same
constrained bandwidth.

You only stand to gain if this scheme can take advantage of different
access paths (a second NIC or modem or something) or if the bottleneck is
the remote server, and not your connection. Sometimes the latter is the
case -- I think we all seem to be having a slow time getting downloads
from Sourceforge's site, for example. But in most cases I don't think
there's going to be enough gain from parallelizing to justify all the work
it'll take to get it to work reliably.


Chris Pepper
-- 
Chris Pepper:  http://www.reppep.com/~pepper/
Rockefeller University:   http://www.rockefeller.edu/




Re: [Fink-devel] Fink CD

2002-04-16 Thread Chris Pepper

At 3:39 PM +0200 2002/04/10, Max Horn wrote:
First off, I'd like to keep the discussion regarding a Fink CD 
completely free of mentioning OpenOSX. I don't feel such a CD should 
be made to spite anybody, but if at all, for its own good. Anyway, 
here are some quick thoughts of mine on this:


1) It will be very helpful to anybody with a not-so-fast connection; 
to people who want to use an install CD to quickly install this 
stuff on their machines; for people who want to have a local Fink 
server, so to say, where machines behind a restrictive firewall can 
download stuff; etc. I definitely see a use in it.

2) There is a lot of stuff in Fink. I guess we'll have to make at 
least two CDs: one full of source, one with binaries. Ideally, 
both would contain a mini Fink installer that allows you to 
bootstrap Fink; the binary CD(s) should be usable on any Fink 
system, i.e. you plug the CD in and Fink can get packages from it; 
and the source CD(s) likewise, i.e. it should be possible to get 
sources from it transparently.

Max,

I like the idea of making it easy to produce a local fink 
repository, separate from the installer and marketing issue.

Perhaps we should start with making it easy for an admin to 
build a local repo, for network and/or CD installs. Should we start 
by suggesting "fink fetch-all", and does anyone feel like writing 
"fink build-all", for all known packages (or just those already 
fetched, so users have somewhat finer control)? Can we generalize & 
simplify the process you use for the binary distro, both to make the 
next version easier on you and to make it easier for others to roll 
their own?
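A hypothetical "build-all over already-fetched sources" could be 
little more than a wrapper that walks /sw/src and invokes "fink build" 
per package. Everything below is illustrative, not an existing fink 
command, and the package-name guessing from tarball names is 
deliberately crude:

```python
import os
import subprocess

def build_fetched(src_dir="/sw/src", run=subprocess.check_call):
    """Build every package whose source tarball is already present.
    `run` is injectable so the loop can be dry-run or tested."""
    built = []
    for name in sorted(os.listdir(src_dir)):
        for ext in (".tar.gz", ".tgz", ".tar.bz2"):
            if name.endswith(ext):
                # crude: assume "<package>-<version>" tarball naming
                pkg = name[: -len(ext)].rsplit("-", 1)[0]
                run(["fink", "build", pkg])
                built.append(pkg)
                break
    return built
```

Passing a list-collecting `run` gives a dry run, which is also how an 
admin could preview what a real build-all pass would touch.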

It seems if we made it easier for users with a fast 
connection (including ourselves) to get complete and current sets of 
.info/.patch files, source tarballs, and .debs, we'd make it easier 
for others to build (and customize, if desired) their own fink 
distributions, either by putting them on a local file server, or 
burning their own CDs. This is useful even without a GUI, GUI 
installer, and logo.

Then some fool^H^H^H^Hfine volunteer can build .iso images 
with the source & binary installers, and we have the essential tools 
for CLI users. This paves the way for a GUI installer, should we 
decide to go down that road. If and when we get near that milestone, 
we can decide whether we're ready to charge for anything (or, indeed, 
anyone can decide they want to sell fink, without a consensus, 
hopefully with proper credit).


FWIW, I use Yellow Dog Linux, and agree there's an obvious 
similarity between our source and theirs, but my experiences with 
them and their idea of customer service have been very depressing.

More interesting to me would be Daemon News, a BSD shop that 
already sells Darwin & GNU-Linux CDs 
(http://www.daemonnewsmall.com/darwin141.html). 
http://www.linuxiso.org/, which someone mentioned, has FreeBSD & 
NetBSD.


Chris Pepper

3) Yeah, having a logo would be nice for a CD, and for other stuff, 
too, but I don't see it as a strict requirement... OK, Justin? 8-)

4) Face it, it's not that trivial to make one, at least if you are 
not willing to do a sloppy job. Sure, anybody can quickly make an 
ISO. But for this, we'll want to test it well, and make sure it 
really works out of the box. Also, it would be nice to provide as 
much convenience as possible (see 2).
In the IT business, you quickly learn that Quality Assurance can 
easily eat as much time as programming/design of the application :-)

5) To sell or not to sell - I say we should first worry about 
getting ISOs. Then people can use them or not. Next is we can 
research whether it's possible to make them available to users 
somehow. I am certainly *not* willing to take personal financial 
risks for this, though.

-- 
Chris Pepper:  http://www.reppep.com/~pepper/
Rockefeller University:   http://www.rockefeller.edu/
