Re: Printer Daemon (Update)

2010-11-23 Thread Garance A Drosihn

On 11/22/10 9:41 PM, Tim Darby wrote:
> Here's the output:
>
> srwxrwx---  1 root  wheel  0 Nov 21 23:14 /var/run/printer
> uid=1001(tim) gid=1001(tim) groups=1001(tim)
>
> and this is the error I'm seeing:
>
>>  /usr/bin/lpr -PHPLaser /home/tim/test
> lpr: Unable to connect to /var/run/printer: Permission denied
> lpr: Check to see if the master 'lpd' process is running.
> jobs queued, but cannot start daemon.
>

What is the output from:   ls -ld /var/run
And maybe: ls -ld /var
although I'd be pretty surprised if /var was the problem.

Seems strange that several people are seeing this.  I can't
imagine what's going on, so all I can do is make some wild
guesses at it.

--
Garance
g...@freebsd.org  (lpr/lpd guy for FreeBSD)


Re: rsync vs. cvsup benchmarks

2008-01-30 Thread Garance A Drosihn

At 4:38 PM -0600 1/30/08, Vincent Stemen wrote:


That's a good point.  It is possible that cvsup would fare better with
a matching sup directory.  I actually forgot about cvsup keeping that
separate state directory when I ran the benchmarks.  However, from my
viewpoint that does not invalidate the test results or convince me
that there is any reason to use cvsup for mirroring, for several
reasons.




  2. Rsync did not have the benefit of a local state directory either, so
 it was a one-on-one fair comparison.  Based on all the cvsup claims,
 I would have expected it to at least come close to matching rsync's
 performance.  Then I would expect a higher possibility of it being
 faster than rsync with the state directory available.


Geez.  Just use rsync if you want to use it.  But the above reasoning
is absurd.

"CVSUP must live up to its performance claims, even though I refuse to
run CVSUP the way it is designed to run".

"Rsync does not use this method to optimize its performance, so I refuse
to let CVSUP use this method to optimize its performance.  And look,
CVSUP is slower than rsync as long as I make sure cvsup cannot use the
performance enhancements which were designed into it!"

Just use rsync, and shut up about it already.  No one is asking you to
use cvsup.  But stop trying to defend an obviously incomplete benchmark
by pulling out such bizarre reasoning.  If you don't want to do a real
benchmark, then just don't bother doing one.  I can't blame you for that,
as I also don't want to do the amount of work it would take to do a
really useful benchmark.

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: rsync vs. cvsup benchmarks

2008-01-30 Thread Garance A Drosihn

At 6:38 AM + 1/30/08, Vincent Stemen wrote:


My conclusions:
===

The results are dramatic, with rsync performing several times faster on
average while loading the processor on the client side only a little over
a third as much as cvsup.  Either the performance claims about cvsup being
faster than rsync are based on theory without real-world testing, or cvsup
has gotten a lot slower, or rsync has gotten a lot faster than in the past.

For those who are concerned about the validity of these results without
including server side load tests and tests under bandwidth congestion
conditions, here are my thoughts on the matter.

No matter where a bottleneck exists in the transfer, whether it
is server side load, client side load, or bandwidth limits, you
are going to experience similar loss of throughput.


The additional testing is nice to see, but you're not thinking the
issues through far enough when it comes to scaling up a service like
this.  It's good to benchmark something which hasn't been tested in
some time, but you have to do pretty extensive benchmarks if you're
going to come to any sweeping conclusions for *all* uses of a program.

Let's say rsync takes 10% of the CPU on the client, and 10% of the
CPU on a server.  Let's say cvsup for the same update takes 15% CPU
on the client, and 7% on the server.  If your benchmarks ignore the
load on the server, then they cannot possibly see problems which
could occur when scaling up to more clients.

With a single client connecting to the server, *neither* side is the
bottleneck.  It might be that disk speed is the main bottleneck at that
point.  The update might take longer with cvsup due to the 15% CPU on
the client, but the CPU isn't much of a *bottleneck* at that point.

But with 10 connections in my fake scenario, rsync could be using 100%
of the CPU on the server.  It's at this point that rsync will see some
bottleneck, while cvsup would only be using 70% of a CPU.  Yes, cvsup
will be using much more on each client, but then each client shows up
with its own CPU(s) to take up whatever load is thrown at that client.
The server does not receive additional CPUs or network cards for
each connection that it accepts.

Again, my feeling is that rsync is almost certainly fine for using
with dragonfly's repository, given how much faster machines and
networks have gotten, and how many simultaneous connections are seen
for dragonfly repo-servers.

It makes plenty of sense to stick with rsync if your servers are not
overloaded.  But if you want to prove rsync is better than cvsup for
the loads that cvsup was *MEANT* to handle, then your tests are
not extensive enough.  Benchmarking a client/server setup like cvsup
is a lot of work if you want to get a complete picture.

Also note that you don't need to "prove" that rsync is "better".  If it
is good-enough for what Dragonfly needs at this point, then use it.  It
would be the people running the servers who will care about the load
there, and if you have enough servers then dragonfly may never notice
any problems from using rsync.  Maybe a dragonfly repo-server will
never see more than 10 simultaneous connections, so you'll never even
hit the situations that freebsd had to deal with when cvsup was written.
(I suspect there are a lot fewer people on dial-up connections now,
for instance, so each individual connection will take a lot less
wall-clock time than it used to, and you're much less likely to see
100 simultaneous connections.)

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: cvsup

2008-01-19 Thread Garance A Drosihn

At 8:24 AM + 1/19/08, Vincent Stemen wrote:


Also, if nobody has written one or is working on one, I am considering
writing a script to provide basic cvsup-like features/functionality
for repository updates via rsync.


You might want to wait a bit.  In freebsd-hackers, there's a thread on
the progress of adding cvsmode support to 'csup' (the re-write of cvsup
in C).  It is actively being worked on and tested.  Of course, 'csup'
probably needs some benchmarks done on it, too.

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: cvsup

2008-01-18 Thread Garance A Drosihn

At 9:16 AM + 1/18/08, Vincent Stemen wrote:


I realize that everything I read comparing cvsup to rsync indicates that
cvsup is faster with mirroring cvs repositories.  So I decided to run my
own tests this evening.  I thought everybody might be interested in the
results.

My results are not even close to what others are claiming.  Rsync was
vastly faster.  Granted, so far as I know, this was not right after
a large number of files have been tagged, but as you mentioned, that
does not happen very often.  If anybody wants to email me after that
does happen, I will try to make time to re-run the tests.


This is a very inadequate benchmark.  Certainly rsync works very well,
and the dragonfly repositories have enough capacity that they can
handle whatever the load is.  So, I realize that it is perfectly fine
to use rsync if that's what works for you.  And I realize that there
is the (unfortunate) headache of needing modula-3 when it comes to
CVSUP.  So, I'm not saying anyone has to use cvsup, and I am sure that
rsync will get the job done.  I'm just saying that this specific
benchmark is so limited that it is meaningless.

What was the load on the server?  How well does rsync scale when there
are thousands of people updating at the same time?  (in particular, how
well does the *server* handle that?).

How big of an update-interval were you testing with?  If I'm reading
your message right, the largest interval you tested was 2-days-worth
of updates.  For most larger open-source projects, many end-users are
going at least a week between sync's, and many of my friends update
their copy of the freebsd repository once every three weeks.  Some
update their copy only two or three times a year, or after some
significant security update is announced.  Note that this means the
server sees a huge spike right after security updates, because there
are connections from people who haven't sync'ed in months, and who
probably would not have sync'ed for a few more months if it wasn't
for the security update.

Tags occur rarely, but they do occur.  And in the case of dragonfly,
there are also the sliding tags that Matt likes to use.  So while he
doesn't create a new tag very often, he does move the tag in a group
of files.  (Admittedly, I have no clue as to how well cvsup does
with a moved tag, but it would be worthwhile to know when it comes
to benchmarking rsync-vs-cvsup for dragonfly.  It is quite possible
that cvsup will actually get confused by a moved tag, and thus not
be able to optimize the transfer of those files.)

The shorter the update-interval, the less likely that all the
CVS-specific optimizing code in cvsup will do any good.  Note, for
instance:

For a 1.5-hour-old repository:
rsync total time: 34.846
cvsup total time: 3:40.77
=> cvsup took 6.33 times as long as rsync

For a 2-day-old repository:
rsync total time: 2:03.07
cvsup total time: 9:14.73
=> cvsup took 4.5 times as long as rsync

Even with just two data points, we see that the larger the window,
the less well rsync does compared to cvsup.

In that 1.5 hour old repository, how many files were changed?  10?
100?  If there are only 100 files to do *anything* with, then there
isn't much for cvsup to optimize on.  It's pretty likely that rsync
is going to be faster than cvsup at "sync"ing a file which has zero
changes which need to be sync'ed.

If you have users who are regularly sync-ing their repository
every 1.5 hours, 24 hours a day, 7 days a week, then there are some
cvsup servers which would block that user's IP address for being such
an annoying pest.  The only people who need to sync *that* often are
people who themselves are running mirrors of the repository.  For all
other users, syncing that often is an undesirable and unwanted load
on the server.  The people running a sync-server wouldn't want to
optimize for behavior patterns which they don't want to see in the
first place.

I would say the *smallest* window that you should bother testing is
a six-hour window (which would be four updates per day), and that
the most interesting window to test would be a one-week window.

It took more than a year to write cvsup, by someone who was working
basically full-time at it.  (that's what he told me, at least!  :-)
He wouldn't have put in all that work if there was no point to it,
and he would have based his work on a wide range of usage patterns.


Unless I am overlooking something obvious, I think I am going to
stick with updating our repository via rsync :-).


As I said earlier, rsync is certainly a reasonable solution.  I'm just
commenting on the "benchmark".  And I realize I haven't done *any*
benchmarks, so I can't claim much of anything either.  But you would
need a much more elaborate and tedious set of benchmarks before you
could draw any significant conclusions.

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]

Re: bsdstats.org

2006-09-11 Thread Garance A Drosihn

At 9:06 PM +0200 9/11/06, Simon 'corecode' Schubert wrote:

Garance A Drosihn wrote:

I'm content with the post-install "Would you like to submit your
information" question.  I don't know why on Earth this script has
to be put in periodic though.  What am I missing?

The idea is to track how many machines are actively running *BSD.
If all we know is that you installed *BSD last month, then we have
no idea if you're still running it this month.


actually, these numbers won't be accurate.  Just take an office
machine running *bsd:  periodic won't run if the box is shut down
every evening.  i think there is a need for some sophistication.


I'm sure we'll find many good ideas which need to be included in
the stats script, as we get more experience with it.  I expect that
Marc will be busy writing updates for many months to come...  :-)

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: bsdstats.org

2006-09-11 Thread Garance A Drosihn

At 1:18 PM -0400 9/11/06, Adrian Michael Nida wrote:


IIRC, OpenBSD still has their [EMAIL PROTECTED] mail account to
collect hardware information.  NYCBUG also has something similar:

http://www.nycbug.org/index.php?NAV=dmesgd;SQLIMIT=20

I'm content with the post-install "Would you like to submit your
information" question.  I don't know why on Earth this script has
to be put in periodic though.  What am I missing?


The idea is to track how many machines are actively running *BSD.
If all we know is that you installed *BSD last month, then we have
no idea if you're still running it this month.

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: Not sure how to do this tricky install...

2006-08-19 Thread Garance A Drosihn

At 9:31 AM -0700 8/18/06, Matthew Dillon wrote:


In fact, it is my thinking that we should adjust the
default sshd_config to set the PermitRootLogin field
to 'without-password' by default, since it won't do
anything unless the SSH keys are also installed in
root's authorized_keys file.


Sounds like a good idea.  The option should have been called
something else though.  One of my co-workers saw

   PermitRootLogin without-password

in some of the config files I set up, and nearly had a
panic attack!  Some phrase more like "only-by-keys" would have
been clearer.  Or at least include a comment in the sshd_config
file which makes the meaning clear to admins who aren't used to
seeing that option.
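For example, a comment along these lines in the shipped sshd_config
might prevent the scare (the wording is mine, not OpenSSH's):

```
# NOTE: "without-password" does NOT allow passwordless root logins.
# It DISABLES password authentication for root, so root may only log
# in via SSH keys listed in root's authorized_keys file.
PermitRootLogin without-password
```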

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: Ruby on Rails and pkgsrc

2006-03-20 Thread Garance A Drosihn

At 8:19 AM -0800 3/20/06, Jeremy C. Reed wrote:

On Mon, 20 Mar 2006, Garance A Drosihn wrote:


 Among other things that gems provides is a way to handle
 multiple versions of the same package installed at the
 same time.  Ruby scripts can specify which versions of
 some package that they will work with, and the same OS
 can easily support different scripts which "must have"
 incompatible versions of a given gem.


Just want to mention that pkgsrc provides a feature like
this also -- where multiple versions of the same package
can be installed at the same time.  It is not well tested (and
not all packages have been checked), but I have used it
some myself.


I'm not saying anyone should abandon pkgsrc for managing
other packages, since the vast majority of packages
have absolutely nothing to do with ruby.  But I would not
abandon ruby-gems only to spend a lot of time and effort
trying to re-package every single gem in some other form.
That is a lot of duplicated effort on an ongoing basis,
and I'm sure that effort would see much higher rewards
if spent on other areas.

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: Ruby on Rails and pkgsrc

2006-03-19 Thread Garance A Drosihn

At 7:27 AM -0800 3/18/06, Jeremy C. Reed wrote:

 > > Im planning to learn Ruby, and I want to setup Apache2,
 > > MySQL and Ruby on Rails, but I cannot find a rails port
 > > in pkgsrc. Is there one actually?
 >

 Just install misc/rubygems (ruby's own component management
 infrastructure) from pkgsrc and then do a "gem install rails".


Does this provide a way to uninstall?  And does this provide
a way to manage the installed files (such as pkg_info,
pkg_delete)?

Anyone interested in packaging this?


The gems package-management system provides features that other
OS-based package-management systems do not provide.  That's
mainly because developers in the ruby community expect to use
gems to distribute their package.  Thus, the *original* developer
will create and support the gem for their package, instead of
having some "ports-team developer" constantly chasing after
a package, and re-working the build environment so that package
will build on the target OS.  OS-specific ports teams are
*constantly* playing catch-up with whatever the original
developers are releasing.

Among other things that gems provides is a way to handle
multiple versions of the same package installed at the
same time.  Ruby scripts can specify which versions of
some package that they will work with, and the same OS
can easily support different scripts which "must have"
incompatible versions of a given gem.

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: csup import?

2006-03-06 Thread Garance A Drosihn

At 11:34 AM +0800 3/7/06, Jiawei Ye wrote:

Hi fellow DF users,

CSup is a cvsup replacement written in C
 ( http://mu.org/~mux/csup.html ).
It was imported into FreeBSD HEAD a few days ago and so far
has proved to work in checkout mode.  Do we have the incentive
to import this into our system so that we don't need cvsup-bin?

There is a minor compilation error when I tried to build the
latest snapshot from the site on DF-Preview

cc -O ... -c globtree.c
In file included from globtree.c:31:
/usr/include/regex.h:47: error: syntax error before "regoff_t"
/usr/include/regex.h:47: warning: data definition has no type or storage
class


In glancing at this, and comparing /usr/include/regex.h in
FreeBSD vs. the one in Dragonfly, my guess is that in globtree.c
you need to have:

#include <sys/types.h>

before

#include <regex.h>

However, I also notice that in the most-recent version of
globtree.c in the FreeBSD base system, <sys/types.h> *is*
included before <regex.h>.  The problem is that the snapshot
you have is a few days old.  If you check:

http://www.freebsd.org/cgi/cvsweb.cgi/projects/csup/globtree.c.diff?r1=1.3&r2=1.4&f=h

you'll see that Maxime fixed this same bug just a few days
ago.  In his case, he had to fix it to get 'csup' to compile
on FreeBSD 4.x.  So it isn't surprising that the same issue
has come up for Dragonfly.


I'd appreciate it if anyone would help get this into our
base system.


You might want to wait a little longer (maybe just a few
weeks), as Maxime is still fixing minor bugs in this new
C-based implementation.

Also note that 'csup' is not a *complete* replacement for
cvsup, at least not yet.  So you will probably still need
to keep cvsup-bin around as an option for people.  It may
grow into a complete replacement as time goes on.

Certainly I do think it would be nice to have 'csup' pick up
the duties of the 'cvsup' client wherever it can be used...

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: Subversion for DF sources

2006-01-12 Thread Garance A Drosihn

At 3:43 PM +1100 1/12/06, Nigel Weeks wrote:

A week or so ago, a discussion rattled around about cvsup
being written in c++.


Some comments on this part of your message:  FreeBSD already has
a side project to rewrite cvsup in C (plain C, not C++).  Right
now this is not a full featured replacement, but it works.  It is
called 'csup', and is in the freebsd projects repo (which is a
separate repository from the base-system repo, and separate from
freebsd-ports).

For my own purposes, 'csup' is not a complete-enough replacement
for 'CVSup', but I expect it's a much better starting point than
several thousand lines of modula-3 and an empty C++ source file.

At the *exact* same time that the 'csup' project was first being
worked on, it happens that I was paying an RPI grad student to do
a rewrite of CVSup, and that work was in C++ (because that's the
language he had the most programming experience with).  Once we
found out that two projects were going on, we kinda compared notes
and at that time it seemed like the 'csup' project was farther
along than mine, so I cancelled mine.  I could probably dig up
that code if you're really keen on a C++ rewrite, but I'm not sure
how much of a help it would be.  (I am not a C++ programmer...)


What about launching into it, and moving to subversion instead?
That way, the updater could be included in the base system, as
it's C++ AND BSD licensed.

Probably great scads of work, but it might be a good idea.


I like subversion.  Other developers have become attached to other
SCM systems.  Everyone hates CVS, but they all know CVS, so it's
often easier to stick with CVS than to fight over what should
replace it.  Everyone knows CVS sucks, but they also know it has
a long, reliable track record, and they know how to "live with" the
parts of CVS where it sucks.  More and more significant projects
(such as samba) are switching to subversion, but it still doesn't
have quite the track record that CVS has.

I'm a little confused with your comment of "What about launching
into it, and using subversion instead?".  If you want to launch into
a rewrite of CVSup, then why would you *also* switch to subversion?
Did you mean:  "What about *NOT* launching into a CVSup rewrite, and
moving to subversion instead?"

There is also an OpenCVS being worked on by members of the OpenBSD
group, which would give you CVS with a BSD license.  I know they plan
to be "completely CVS compatible -- plus important new features!",
but I have no idea if this would give you a CVSup-like ability.  And
obviously, this doesn't actually exist as a usable alternative
yet.  Of course, if you're going to write something from scratch, then
what you have doesn't exist yet either, so you can't really claim that
as an advantage over OpenCVS...

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: DP performance

2005-12-02 Thread Garance A Drosihn

At 6:48 PM -0800 12/1/05, Danial Thom wrote:

--- Matthew Dillon <[EMAIL PROTECTED]>
wrote:

 > :  [various observations based on years of
 > :  real-world experience, as anyone could
 > :  find out via a competent google search]


..., and you also obviously have no practical
experience with heavily utilized network devices,
because you seem to have no grasp on the real issues.


A-hahahahahahahahahahahaha.  Thanks.  It's always
nice when I can start my day by reading such a
funny line.

...I suspect a troll is present in these here woods.

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: Compatability with FreeBSD Ports [debian package tools]

2005-08-17 Thread Garance A Drosihn

At 6:47 PM +0200 8/17/05, Michel Talon wrote:

Hiten Pandya wrote:


Extremely important to get binary package management right,
including dependency handling, (automatic) updating.



If it were not for this software, and some other software like Gnome,
which is constantly broken for one reason or another, I have the
impression (illusion?) that portupgrade -PP would do the job perfectly.


I have had very good luck with portupgrade, on multiple freebsd
systems on multiple platforms.  I do avoid the biggies like KDE
or Gnome, which obviously helps.

That said, I do not quite trust 'portupgrade -PP'.  Portupgrade
is a great tool, but (IMO) you can't use it blindly.  What I do
is 'pkg_version -vL =' to see what packages need to be updated.
I then pick some subset of packages which *I* think are the most
important, and I try 'portupgrade -Rr -n' with them.  If they
pull in some other package, and I don't have any objection to
that other package, then I'll redo the 'portupgrade -Rr -n' with
the extra package.  Once I get a list where the packages I specify
on 'portupgrade -Rr -n' match the packages which it claims it will
build, *then* I do 'portupgrade -Rr' without the '-n'.

In some cases I'll wait to upgrade some packages, because I've
noticed email from other users who have had trouble with those
packages.  Note that this is not a problem with *portupgrade*,
it's a problem with the original package(s).  Portupgrade won't
save you if the port itself is wrong in some subtle manner.

What it boils down to is that *none* of these package-tracking
systems are completely reliable.  This thread is going to go on
forever if everyone keeps "voting no" on some package-system
because of problems they've seen with it.  Pick something, and
go with it, and just admit to yourself that "this sucks, but at
least it's something we can do right now".

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]


Re: HEADS UP ON -DEVELOPMENT

2005-08-05 Thread Garance A Drosihn

At 9:23 AM -0700 8/5/05, Matthew Dillon wrote:

Joerg and I have discussed the struct stat and struct dirent
changes.  So far the stat work has been committed and the dirent
work is still in progress.  This is work to increase ino_t from
32 to 64 bits and nlink_t from 16 to 32 bits.


How about dev_t to 64 bits?  For "worldwide" distributed filesystems
(such as OpenAFS), this would be a very nice thing.


At 5:42 PM +0100 8/5/05, Hiten Pandya wrote:

What about time related fields, are they currently 64-bits wide?


We are stuck with various time standards so there isn't much
we can do with the time fields.  The standards idiots didn't
fix the seconds  field when they changed the microseconds
field to nanoseconds.


One thing I considered is coming up with a STRUCT_TIME_T macro.
This could be used to at least *reserve* 64-bit areas in any struct
where a time_t value is used.  That way, if you later want to have
a 64-bit time_t, you'll have the room reserved for it.  I'm hoping
to get something like this together for FreeBSD 7.x, when I'm in a
particularly optimistic mood...

Something along the lines of:

#include <sys/types.h>
#include <machine/endian.h>

/* This uses specific sizes, and then some other include file
 * would set the real time_t to either time32_t or time64_t. */
typedef int32_t time32_t;
typedef int64_t time64_t;

#if _BYTE_ORDER == _LITTLE_ENDIAN
#define STRUCT_TIME_T(vname) \
union __aligned(8) { \
        time64_t        __CONCAT(vname,_64); \
        struct { \
                time32_t        vname; \
                int32_t         __CONCAT(vname,_h32); \
        }; \
}
#elif _BYTE_ORDER == _BIG_ENDIAN
#define STRUCT_TIME_T(vname) \
union __aligned(8) { \
        time64_t        __CONCAT(vname,_64); \
        struct { \
                int32_t         __CONCAT(vname,_h32); \
                time32_t        vname; \
        }; \
}
#endif

(which I have done some limited testing with, and it seems to do
what I want it to do).  You would use it like:

struct test_stat {
        ...
        dev_t st_rdev;            /* device type */
        STRUCT_TIME_T(st_atime);  /* time of last access */
        long  st_atimensec;       /* nsec of last access */
        STRUCT_TIME_T(st_mtime);  /* time of last data modification */
        long  st_mtimensec;       /* nsec of last data modification */
        STRUCT_TIME_T(st_ctime);  /* time of last file status change */
        long  st_ctimensec;       /* nsec of last file status change */
        char abyte;
};

--
Garance Alistair Drosehn=   [EMAIL PROTECTED]
Senior Systems Programmer   or  [EMAIL PROTECTED]
Rensselaer Polytechnic Instituteor  [EMAIL PROTECTED]