Re: [Fwd: avalon pkgsrc DragonFly 2.5.1/i386 2009-11-05 02:34]

2009-11-15 Thread justin
> jus...@shiningsilence.com wrote:
>> Another pkgsrc 2009Q3 build for i386 completed - build reports for
>> anyone
>> who wants to fix packages are at:
>>
>> http://avalon.dragonflybsd.org/reports//20091105.0234/
>>
>> I could use a system for 2.4.x builds - anyone have a machine with root
>> access and OK upstream bandwidth available?
>
> Can't we build it on avalon?

Yeah, and I already started doing that.  I like having separate machines,
though, since that way if an individual machine becomes unavailable (like
what just happened), overall package building doesn't stop.



Re: [Fwd: avalon pkgsrc DragonFly 2.5.1/i386 2009-11-05 02:34]

2009-11-15 Thread Simon 'corecode' Schubert

jus...@shiningsilence.com wrote:

> Another pkgsrc 2009Q3 build for i386 completed - build reports for anyone
> who wants to fix packages are at:
>
> http://avalon.dragonflybsd.org/reports//20091105.0234/
>
> I could use a system for 2.4.x builds - anyone have a machine with root
> access and OK upstream bandwidth available?


Can't we build it on avalon?

cheers
  simon


Re: HAMMER in real life

2009-11-15 Thread Matthew Dillon

:Matt used to use hardlinks for some sort of historical arrangement; after
:a certain point, the total number of hardlinks was too much to handle.  He
:might have mentioned this somewhere in the archives.  I don't know if this
:would bite you the same way with gmirror.
:

Here's a quick summary:

* First, a filesystem like UFS (and I think UFS2, but I'm not sure)
  is limited to 65536 hardlinks per inode.  This limit is quickly
  reached when something like a CVS archive (which itself uses hardlinks
  in the CVS/ subdirectories) is backed up using the hardlink model
  (there is a sketch of that model below).  Once the limit is hit the
  backup has to fall back to real copies, which means a lot of data
  duplication and wasted storage.

* Since directories cannot be hardlinked, directories are always
  duplicated for each backup.  For UFS this is a disaster because
  fsck's memory use is partially based on the number of directories.

* UFS's fsck can't handle large numbers of inodes.  Once you get
  past a few tens of millions of inodes fsck explodes, and it can take
  9+ hours to run even when it doesn't.  This happened to me several
  times back in the days when I used UFS to hold archival data and
  backups.  Everything worked dandy until I actually had to fsck.

  Even though things like background fsck exist, it has never been
  stable enough to be practical in a production environment, and even
  if it were, it eats disk bandwidth, potentially for days after a
  crash.  I don't know whether that has changed recently.

  The only workaround is to not store tens of millions of inodes on a
  UFS filesystem.

* I believe that FreeBSD was talking about adopting some of the LFS work,
  or otherwise implementing log space for UFS.  I don't know what the
  state of this is but I will say that it's tough to get something like
  this to work right without a lot of actual plug-pulling tests.

Either OpenBSD or NetBSD, I believe, has a log-structured extension to
UFS which works.  Not sure which, sorry.
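
For reference, the hardlink backup model being described usually looks
something like this (a minimal sketch with hypothetical paths; rsync's
--link-dest is one common way to do it, cp -al is another):

    # Each new backup directory hardlinks unchanged files against the
    # previous backup instead of copying them.  Every unchanged file
    # gains one more link per backup, heavily hardlinked trees (like
    # CVS/ subdirectories) approach the per-inode link limit quickly,
    # and directories, which cannot be hardlinked, are recreated each time.
    rsync -a --link-dest=/backup/2009-11-14 /home/ /backup/2009-11-15/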

With something like ZFS one would use ZFS's snapshots (though they aren't
as fine-grained as HAMMER snapshots).  ZFS's snapshots work fairly well
but have higher maintenance overheads than HAMMER snapshots when one is
trying to delete a snapshot.  HAMMER can delete several snapshots in a
single pass, so the aggregate maintenance overhead is lower.
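
As a rough illustration of the single-pass deletion (hypothetical paths,
and assuming the usual softlink-style snapshot layout): snapshots are
softlinks created by 'hammer snapshot', so dropping several of them is
just a matter of removing the links and pruning once:

    # remove however many snapshot softlinks are no longer wanted...
    rm /home/snapshots/snap-20090901-0300
    rm /home/snapshots/snap-20090908-0300
    # ...then a single prune pass reclaims the history for all of them
    hammer prune /home/snapshots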

With Linux... well, I don't know which filesystem you'd use.  ext4 maybe,
if they've fixed the bugs.  I've used reiser in the past (but obviously
that isn't desirable now).

--

For HAMMER, both Justin and I have been able to fill up multi-terabyte
filesystems running bulk pkgsrc builds with default setups.  It's fairly
easy to fix by adjusting the run times for pruning and reblocking upward
in the HAMMER config (edited with hammer viconfig on the filesystem).
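
For reference, the config that hammer viconfig edits looks roughly like
this (a sketch based on the defaults hammer cleanup generates; exact
directives and numbers may differ by release).  Most lines are a
directive, a period, and a maximum run time, so raising the last column
gives pruning and reblocking more time per nightly run:

    snapshots 1d 60d      # period and retention, not a run time
    prune     1d 30m      # raised from the usual 5m run time
    reblock   1d 30m      # raised from the usual 5m run time
    recopy    30d 10m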

Bulk builds are a bit of a special case.  Due to the way they work, a
bulk build rm -rf's /usr/pkg for EACH package it builds, then
reconstructs it by installing the previously-built dependencies before
building the next package.  This eats disk space like crazy on a normal
HAMMER mount.  It would be more manageable with a 'nohistory' HAMMER
mount, but my preference, in general, is to use a normal mount.
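
If you do want the nohistory route on a build box, it can be applied to
the whole mount or to just the directories that get thrashed (the device
and paths here are only placeholders):

    # whole filesystem, via a mount option (or the matching fstab entry)
    mount_hammer -o nohistory /dev/ad6s1d /build

    # or per-directory; newly created files inherit the flag
    chflags nohistory /usr/pkg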

HAMMER does not implement redundancy like ZFS, so if redundancy is
needed you'd need to use a RAID card.  For backup systems I typically
don't bother with per-filesystem redundancy since I have several copies
on different machines already.  Not only do the (HAMMER) production
machines have somewhere around 60 days' worth of snapshots on them,
but my on-site backup box has 100 days of daily snapshots and my
off-site backup box has almost 2 years of weekly snapshots.

So if the backups fit on one or two drives, additional redundancy isn't
really beneficial.  More than that and you'd definitely want RAID.
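
(For what it's worth, HAMMER's own mirroring is one way to keep those
extra copies on separate machines; the hosts and PFS paths below are
made up, and the slave PFS has to be set up first:)

    # push the master PFS to a slave PFS on a backup box over ssh
    hammer mirror-copy /home/pfs/data backup.example.com:/backup/pfs/data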

-Matt
Matthew Dillon 




Re: problem installing OpenOffice

2009-11-15 Thread justin
> On Saturday 14 November 2009 20:32:42 jus...@shiningsilence.com wrote:
>> Here's my guesses: firefox didn't install because it needs a newer
>> version
>> of sqlite3, which pkgin itself needs and you were upgrading using pkgin.
>> (And pkg_radd won't replace existing packages.)
>
> Sounds sensible. sqlite is ver. 3.6.17. How do I upgrade it?

There are a number of choices.  Update your /usr/pkgsrc, and then:

- pkg_rolling-replace will rebuild dependencies from source in the best
order possible.  It'll take a while, and the system will be tied up during
that time as packages are added/removed.

- pkg_delete -f sqlite3 and pkg_radd sqlite3.  This will be fastest.
Assuming there are no binary incompatibilities between the version you
have now and the new version, everything that depends on it should
continue working.  (Normally pkg_delete wouldn't let you delete a package
other packages need, but the -f makes it happen.)  If you do 'pkg_delete
sqlite3' (with no -f) and look at the packages it lists as dependent,
those packages could be upgraded too.  Of course, if there are other
packages depending on those in turn, the problem gets bigger.

- Normally I'd say 'use pkgin to upgrade binaries', but I think the fact
that pkgin itself depends on the one package you are trying to upgrade is
the cause of the problem.  There may be a 'force' option to pkgin that
will do it.

What I'd do: look for a way to force the upgrade with pkgin.  Failing
that, pkg_delete and pkg_radd sqlite3.  I'm 95% sure you can do this
without any issues.  If you positively cannot risk any downtime on this
system, use pkg_rolling-replace.
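
Roughly, that middle path would look like this (sqlite3 as in this
thread; it's worth running the first command just to see what depends
on it before forcing anything):

    pkg_delete sqlite3     # without -f this just refuses and lists dependents
    pkg_delete -f sqlite3  # force-remove it despite the dependencies
    pkg_radd sqlite3       # fetch and install the newer binary package

    # the slow-but-safe alternative, after updating /usr/pkgsrc:
    pkg_rolling-replace -uv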

There may be other options I haven't thought of; there are a lot of ways
to tackle upgrading within pkgsrc.