[boost] Re: BOOST_STATIC_WARNING ?

2003-06-20 Thread Daryle Walker
On Wednesday, June 18, 2003, at 9:59 PM, David Abrahams wrote:

Daryle Walker [EMAIL PROTECTED] writes:

My point was that warnings are non-portable constructions made up 
by compiler makers.
So are the semantics of #include.  That doesn't mean we can't count 
on certain similarities (though they may be hard to find).
Actually, the semantics of #include aren't that made up; they are
constrained by standard.
Slightly.  They are still non-portable constructions made up by
compiler makers.
As I understand it, the #include directive dumps the contents of a file 
found from the standard (<>) and/or user ("") header space.  The only 
degrees of variance are how the mapping of header names to files/sources 
occurs and how the standard headers are handled.  Equating the 
implementation-defined parts of #include to the warning concept, which 
has no official standing, is a gross misrepresentation.

We rely on what we can count on in practice.
The difference between the theory and practice of #include can't be 
that bad, can it?  Especially compared to some other C++ issues (e.g. 
export).

In contrast, a compiler doesn't even have to have warnings,
In practice they all do.
That doesn't mean that the warnings are effective.  Or the user may 
have them deactivated.

let alone define them in an easy-to-exploit manner or with any
similarity to other compilers.
Whether they do in practice remains to be seen.

I don't want to see a big effort (i.e. a long #if/#elif chain from 
heck with subtle details and could break at the next release of any 
compiler) on something that is inherently non-portable.
How about a small effort?  There aren't really all that many
front-ends.
The amount doesn't matter; it's more of a philosophical objection.

A question to the original poster: why would you need such a 
macro?  Constructs that produce a warning will do so without the macro. 
 It could be needed if:

1.  You wrote conforming code,
2.  And the code passed the compiler's warning filters,
3.  But you want it flagged as a warning anyway
Why would you write such code?  If the code got past the warnings, it 
may be good code.  If you feel it's yucky code, then write a better 
alternative, or accept that your code actually is good enough.  
Messages that are there just to broadcast loathing of your own code 
would be distracting to the user, especially if the user can't fix the 
code.  (A user may strive to have no warnings, but can't because your 
code is in permanent warning mode.  The user could hack the header, but 
what would be the point of presenting the header if the user had to 
finish it?)

Daryle



[boost] Re: Novell NetWare support (filesystem), patch

2003-06-20 Thread Daryle Walker
On Thursday, June 19, 2003, at 10:29 AM, Beman Dawes wrote:

[SNIP]
Because the platform is apparently so similar to POSIX and/or Windows, 
I'd prefer not to treat it as a distinct platform. Rather, I'd like to 
treat BOOST_NETWARE as a variation on BOOST_POSIX and/or  BOOST_WINDOWS.
[TRUNCATE]

Aren't there non-Windows and non-POSIX platforms that support NetWare?  
(I think pre-X Mac OS is one.)  Also, would this idea hide differences 
that matter?  (Something that is different shouldn't pretend not to be.)

Daryle



[boost] Re: Novell NetWare support (filesystem), patch

2003-06-20 Thread Petr Ovchenkov
Beman Dawes wrote:

 Reading the patch, I see one or two specific differences from POSIX or
 Windows, but basically operational functions are treated as if on a POSIX
 platform, while paths are treated as if on Windows.
 
 Does that mean the Windows API is not available? Or was there some other
 reason for not choosing the Windows API for operational functions?
 
 (Mixing POSIX operational functions with Windows paths wouldn't be my
 first choice as I'm afraid of subtle bugs in hard to anticipate corner
 cases. That's why the current implementation doesn't just use the POSIX
 functions, even when available on Windows.)

Hi Beman,

I am not an expert in Novell either... Unfortunately, the Novell experts that I
contacted do not work with C++ at all...

From a development point of view, Novell NetWare has two parts: programming
client applications (applications that run on a Windows [or *nix] computer and
connect to a Novell NetWare server, so such development will use the
Windows API for Windows clients, etc.) and programming for the NetWare server.
In my suggestions I focused on NetWare server programming.

The NetWare server API is a mix of POSIX, Windows, and original elements:

  - the filesystem is multi-rooted; a volume label is an identifier, not a single letter
as in Windows:

SYS:/WORKSHOP/XTESTER.NLM

Paths are case insensitive, and the path delimiter may be either / or \

  - most file operations have POSIX-like calls, but NetWare isn't
POSIX-compliant (though the tendency, as I see it, is to migrate toward POSIX)

 
 Because the platform is apparently so similar to POSIX and/or Windows, I'd
 prefer not to treat it as a distinct platform. Rather, I'd like to treat
 BOOST_NETWARE as a variation on BOOST_POSIX and/or BOOST_WINDOWS.

It's a mix: paths are like a Windows variation, while system calls are closer to
POSIX.

Thanks for your efforts,

   - Petr Ovchenkov




[boost] Re: BOOST_STATIC_ASSERT - a little better

2003-06-20 Thread Pavel Vozenilek

John Torjo [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Hi all,

 I was just thinking (actually, I needed this, while doing some coding),
that
 STATIC_ASSERT could get a little of the SMART_ASSERT flavour.

 What am I talking about?
 In case a STATIC_ASSERT fails, how about dumping some data that was
involved
 in the expression?

SMART_ASSERT is very useful to provide context information for hard-to-reproduce
errors. A STATIC_ASSERT error, by contrast, can be reproduced reliably during
every compilation.

/Pavel





[boost] objects serializing (from comp.lang.c++.moderated)

2003-06-20 Thread Alexander Nasonov
Below is a copy of my post to comp.lang.c++.moderated
http://groups.google.co.uk/groups?q=author:alnsn-mycop%40yandex.ru&hl=en&lr=&ie=UTF-8&selm=3eef18d6%40news.fhg.de&rnum=1

--- cut ---

Thomas Hansen wrote:
 BTW!
 Serialization of objects in C++ or any other language for that reason
 is one of the hardest parts to keep up with your OOP mantras...
 The reason is that all that nice polymorphism and your beautiful
 inheritance tree basically becomes pure C since you'll end up with
 a file stuffed with enums and magic numbers anyway...
 No man alive today has been able to solve this problem completely...


I took some ideas from the boost archive and played around with some code several 
days ago.


The idea is to set up relations between pointers to data members (or to get/set 
member functions) and the names of serialized fields.
Then, during object loading/saving it's possible to access the member using the 
member pointer. The framework will find the appropriate field in the storage by 
name and static_cast it automatically. So, you don't need to make those 
tricky casts.
Relations can be built using a describe function:

// Parent.hpp
struct Parent : Person
{
    Parent* partner;
    std::list<Person*> children;

    static void describe(db::type<Parent>& t)
    {
        // Looks like a class definition
        t << db::class_("Parent")
          << db::derived_from<Person>()
          << db::field(&Parent::partner, "partner")
          << db::field(&Parent::children, "children");
    }
};


It should be registered in the Parent.cpp file using a static object:

namespace {
    db::register_type<Parent, &Parent::describe> r;
}


Access to members is easy:

template<class Class, class T>
T db::get(T Class::* p, db::raw_object obj);

db::raw_object father = find_my_father(parents_database);
Parent* mother = db::get(&Parent::partner, father);

In addition to better static type control you can track pointers or a list 
of pointers at compile-time and then use it at runtime to set up the schema 
and to load an object with all its relations. 
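
For illustration, here is a minimal, self-contained sketch of how such a
relation between a pointer-to-member and a field name might be recorded (this
is not the actual framework code; field_info and make_field are hypothetical
names):

#include <string>
#include <vector>

template <class Class, class T>
struct field_info
{
    T Class::* member;    // which member to read/write
    std::string name;     // field name used in the storage
};

template <class Class, class T>
field_info<Class, T> make_field(T Class::* member, const std::string& name)
{
    field_info<Class, T> f = { member, name };
    return f;
}

struct Person { std::string name; };

int main()
{
    // A framework would keep such records and, when loading an object,
    // look the field up by name and assign through the member pointer.
    std::vector< field_info<Person, std::string> > schema;
    schema.push_back(make_field(&Person::name, "name"));

    Person p;
    p.*(schema[0].member) = "loaded value";
}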

--- cut ---

-- 
Alexander Nasonov
Remove minus and all between minus and at from my e-mail for timely response




RE: [boost] Experimental audience-targeted regression results

2003-06-20 Thread Misha Bergal
Oops, sorry

That was intended as a private e-mail to Aleksey.

My apologies to Peter.

Misha Bergal
MetaCommunications Engineering



Re: [boost] Comments on the bcp tool

2003-06-20 Thread John Maddock
   However, it seems to be confused by the preprocessor library.
   Since the
   includes sometime have the form:
  
  #include BOOST_PP_ITERATE()
  
   the 'bcp' tool does not find them. For example,
   boost/preprocessor/iteration/detail/iter directory is needed by
   boost/function.hpp but is not included.
 
  Is it overkill to use Wave for this? It would solve the
  mentioned problem by correctly preprocessing the inspected sources.

 Here is the (main) code, which uses Wave to output the file names of all
 successfully opened include files (this needs some filtering to avoid
 double output of the same file):

Interesting, the thing is I need the code to find all possible dependencies,
so that if we have:

#if SOME_MACRO
#include <boost/a.hpp>
#else
#include <boost/b.hpp>
#endif

then it should find *both* headers.

Which I don't think a preprocessor will do?

In any case I'm already using regex for other purposes, and boost::function
seems to be the only problematic case so far...
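
For what it's worth, here is a rough sketch (not bcp's actual code) of the
text-scanning idea: because no preprocessing is done, a plain regular
expression over the raw source picks up the #include in *both* branches of a
conditional. std::regex and the file name are used here only to keep the
sketch self-contained.

#include <fstream>
#include <iostream>
#include <iterator>
#include <regex>
#include <string>

int main()
{
    std::ifstream in("boost/function.hpp");
    std::string text((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());

    // Matches #include <...> and #include "..." anywhere in the file,
    // including inside #if / #else blocks a preprocessor would skip.
    std::regex inc("#[ \t]*include[ \t]*[<\"]([^>\"]+)[>\"]");

    for (std::sregex_iterator i(text.begin(), text.end(), inc), e; i != e; ++i)
        std::cout << (*i)[1] << '\n';   // e.g. boost/a.hpp and boost/b.hpp
}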

John.




[boost] Re: BOOST_STATIC_WARNING ?

2003-06-20 Thread Gennaro Prota
On Fri, 20 Jun 2003 00:49:42 -0400, Daryle Walker [EMAIL PROTECTED]
wrote:

On Wednesday, June 18, 2003, at 9:59 PM, David Abrahams wrote:

 Slightly.  They are still non-portable constructions made up by
 compiler makers.

As I understand it, the #include directive dumps the contents of a file 
found from the standard (<>) and/or user ("") header space.  The only 
degrees of variance are how the mapping of header names to files/sources 
occurs and how the standard headers are handled.

Really?

http://www.google.com/groups?threadm=plfbev89vgf08s0o3848bjssetqtqn0at9%404ax.com

Also, I have resisted once but now you want to make me suffer :-)

From another post in this thread:

Warnings are completely non-portable, since:

1. They have no official standing in the standard, just errors do
^^

Really?

[...]
3. They are 100% legal code, the vendor just doesn't like it

Really?


Equating the 
implementation-defined parts of #include to the warning concept, which 
has no official standing, is a gross misrepresentation.

I don't think Dave was really equating. He was probably just saying
that features not exactly specified, or not specified at all, by the
standard can still be useful in practice.

Not that I'm for BOOST_STATIC_WARNING; I just want to avoid such erroneous
information about what is standard and what is not.


Genny.



Re: [boost] Comments on the bcp tool

2003-06-20 Thread John Maddock
 However, it seems to be confused by the preprocessor library. Since the
 includes sometime have the form:

#include BOOST_PP_ITERATE()

 the 'bcp' tool does not find them. For example,
 boost/preprocessor/iteration/detail/iter directory is needed by
 boost/function.hpp but is not included. I've added some explicit
dependencies
 with the attached patch --- can it be applied?

Thanks, will fix - I've got a bunch of other really exciting changes coming
(licence scanning) - so the patch may have to wait a few days.

 Another note is on usability. Say I create directory po and find that some
 files are missing. I tweak bcp source and try again. But attempts to override
 files fail. I remove the po directory. But then bcp says the destination does
 not exist. It's a bit inconvenient --- maybe the destination directory should
 be created if it does not exist. Or maybe there should be an --overwrite
 switch, which would simply clean up the destination before doing copies.

Or maybe it should just go ahead and overwrite, should be easy to fix.

 And the last note:

bcp --boost=/home/ghost/Work/boost boost/function/function1.hpp function1

 creates a tree of 1975 kbytes. Hmm, never thought there's that many
 dependencies...

Hmm, it seems to pull in more of type_traits than I would have expected, and
that pulls in part of mpl, and then a whole load of the preprocessor
library.  I don't think any given compiler will include half of that, but
which half depends upon which compiler...

John




Re: [boost] Re: Comments on the bcp tool

2003-06-20 Thread John Maddock
 Anyone got a Win32 exe of bcp that they could email me?

Eventually there probably will be one to download, but it's still developing
quite rapidly at present, I'll mail you a binary build though.

John.




Re: [boost] Package for Cygwin distribution

2003-06-20 Thread Reed Hedges

Neal D. Becker Jun 19 2003 3:32PM
If you are starting out, why not use current cygwin?

Do you mean why not use GCC (3.2)?
Because 3.2 is buggy and I can't use it with my software.  And it would 
take me quite some time to download it on my slow dialup line. When 3.3 
is available, then I can use that.

If there is interest in a real package for Boost, one which is being 
considered for inclusion in the official release, then I would probably 
build it with 3.2.

reed


RE: [boost] Comments on the bcp tool

2003-06-20 Thread Hartmut Kaiser
John Maddock wrote:

  Here is the (main) code, which uses Wave to output the file 
 names of 
  all successfully opened include files (this needs some filtering to 
  avoid double output of the same file):
 
 Interesting, the thing is I need the code to find all 
 possible dependencies, so that if we have:
 
 #if SOME_MACRO
 #include <boost/a.hpp>
 #else
 #include <boost/b.hpp>
 #endif
 
 then it should find *both* headers.
 
 Which I don't think a preprocessor will do?

Good point!

 In any case I'm already using regex for other purposes, and 
 boost::function seems to be the only problematic case so far...

Anyway, nice to have bcp!
Regards Hartmut




[boost] Re: [mpl] workaround needed for Borland

2003-06-20 Thread David Abrahams
Eric Friedman [EMAIL PROTECTED] writes:

 Aleksey (and all),

 In working on porting boost::variant to Borland, I've come across some 
 trouble with a bug in the compiler.

 Specifically, I'm getting "Cannot have both a template class and 
 function named 'bind1st'" and similarly for bind2nd. I know other MPL 
 headers use BOOST_MPL_AUX_COMMON_NAME_WKND to work around this bogus 
 report.

 I'd apply the patch myself, but due to the heavy use of preprocessed 
 headers, I'm worried I won't get it completely right. So I'll leave it 
 up to Aleksey (or others) to fix.

AFAICT, Aleksey is the only one who knows how to make modifications to
MPL correctly in the context of its preprocessing system.  Aleksey, a
short README would totally solve this problem, wouldn't it?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com



Re: [boost] Advanced match constants scheme

2003-06-20 Thread Guillaume Melquiond
On Thu, 19 Jun 2003, Augustus Saunders wrote:

 PS I'd like to hear more views on this -
 previous review comments were quite different,
 being very cautious about an 'advanced' scheme like this.

I didn't react to this review at first because I was a bit disappointed by
the content of the library. It was more like some questions about the best
way to represent constants in a C++ library. And since I already had given
my thoughts about that, I didn't feel the need to speak about it again.

 Disclaimer: I am neither a mathematician nor a scientist (I don't
 even play one on TV).  I do find the prospect of writing natural,
 efficient, and precise code for solving various equations a
 worthwhile goal.  So, since you asked for comments, here's my
 non-expert thoughts.

 As I understand it, the original proposal's goal was to provide
 conveniently accessible mathematical constants with precision greater
 than current hardware floating point units without any unwanted
 overhead and no conversion surprises.  Additionally, it needed to
 work with the interval library easily.  To work around some
 compilers' failure to remove unused constants or poor optimization,
 we wound up discussing function call and macro interfaces.  Nobody,
 however, is thrilled with polluting the global namespace, so unless
 Paul Mensonides convinces the world that macro namespaces are a good
 thing, some of us need convincing that macros are really the way to
 go.

I am not really interested in macros. I would prefer for the library to
only provide one kind of interface. There could then be other headers on
top of it to provide other interfaces to access the constants.

The standard interface should provide a way to access a constant at a
given precision and an enclosing interval of it. For example, this kind of
scheme would be enough for me: constant<pi, double>::lower(). I'm not
suggesting that such a notation should be adopted; it's just a way to show
what I consider important in a constant.
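
To make the shape of that interface concrete, here is a minimal sketch (not a
proposed Boost interface; the tag type pi and the member names are
hypothetical). The float bounds are the exactly representable values quoted
later in this thread:

#include <iostream>

struct pi {};                     // tag naming the constant

template <class Tag, class T>
struct constant;                  // only specializations are defined

template <>
struct constant<pi, float>
{
    // nearest float, plus an enclosing interval [lower(), upper()]
    static float value() { return 3.1415927410125732421875F; }
    static float lower() { return 3.141592502593994140625F;  }
    static float upper() { return 3.1415927410125732421875F; }
};

int main()
{
    // an interval library could build [lower(), upper()] directly
    std::cout << constant<pi, float>::lower() << " "
              << constant<pi, float>::upper() << "\n";
}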

If a particular precision is not available, the library should be able to
infer it thanks to the value of the constant for other precisions. For
example, if the only available precisions are float and long double
for a particular architecture and/or constant, and if the user needs
double, the library should be able to do such conversions:

  constant<pi, double>::value()  <-  constant<pi, long double>::value()
  constant<pi, double>::lower()  <-  constant<pi, float>::lower()

Please note that for the value of a constant, a higher precision constant
can be used instead [*]; but for the lower and upper bounds, it must be a
lower precision constant. So it is a bit more complicated than just
providing 40-digit constants.

It is the reason why I was rooting for a library specialized in constants.
It would provide an interface able to hide the conversion problems. The
library would have to know the underlying format of floating-point numbers
since the precision of the formats is not fixed (there are 80-bit and
128-bit long doubles, for example).

The Interval library defines three constants: pi, 2*pi and pi/2. They are
needed in order to compute interval trigonometric functions. At the time
we designed the library, it was not an easy task to correctly define these
constants. Here is an example of one of the 91 lines of the header that
defines them:

static const double pi_d_l = (3373259426.0 + 273688.0 / (1 << 21))
  / (1 << 30);

Using such a formula was (in our opinion) necessary in order for the
compiler to correctly deal with these constants. I would be happy to
remove such a header and use another library instead.

 In the course of discussion, a more ambitious plan was proposed.
 Instead of just providing a big list of constants, IIUC it was
 suggested that an expression template library be used to allow common
 constant combinations like 2*pi or pi/2 to be expressed with normal
 operators.  This seems good, it provides a natural syntax and reduces
 namespace clutter and is easier to remember.  However, since the idea
 was to use a special math program to generate high precision
 constants, I'm not sure whether an ETL can eliminate the need to
 compute things like 2*pi with the third party program.  So I'd like
 to know:

 does

 1) 2*pi == BOOST_2_PI

 where BOOST_2_PI is a constant already defined, or does

 2) 2*pi == BOOST_PP_MULT( 2, BOOST_PI )

 using high precision preprocessor math (or something) to sidestep the
 need for defining BOOST_2_PI in the first place?

 If this was implemented the first way, then I would see any
 advanced scheme as being a layer on top of the actual constant
 library, to give it a more convenient interface.  The second way
 might actually impact what constants get defined in the first place,
 in which case we should talk it out enough to know what constants
 should be defined.  But I'm not sure the possibility of an advanced
 scheme should prevent us from defining the basic constants--an
 expression framework could be another library, 

RE: [boost] Re: Re: Math Constants Formal Review - using namespaces to select float size is simpler?

2003-06-20 Thread Paul A Bristow
|  -Original Message-
|  From: [EMAIL PROTECTED] 
|  [mailto:[EMAIL PROTECTED] On Behalf Of Ken Hagan
|  Sent: 20 June 2003 11:27
|  To: [EMAIL PROTECTED]
|  Subject: [boost] Re: Re: Math Constants Formal Review - 
|  using namespaces to select float size is simpler?
|  
|  Paul A Bristow wrote:
|   This scheme may offer more surprises to the many naive 
|  users (like me) 
|   than an explicit (and convenient 'global') choice, for example:
|  
|   using boost::math::double_constants;
|  
|   to ensure that the expected size is used.
|  
|   One can make the compiler warn you about size conversions, 
|  whereas I 
|   have the impression that these rules will mean that you 
|  won't get any 
|   warnings.
|  
|  AFAICT, you will either get exactly the type that the 
|  context requires, or a diagnostic from the compiler saying 
|  that it is ambiguous. (This assumes that a selection of 
|  possible types are available for each
|  constant.) I don't think it is possible to get a quiet conversion.
|  
|  In contrast, using boost::math::double_constants does 
|  allow you to get an implicit size conversion, where the 
|  context requires one. Compilers will warn, but only if users 
|  haven't disabled the warning.
|  
|  So, with the clever approach, all users will find 
|  themselves writing a few explicit conversions to avoid 
|  ambiguities. With the simple approach, only users who have 
|  their warning level right up will need to write explicit 
|  conversions, and then only to silence the compiler.
|  
|  Listening to compiler warnings is something the community 
|  might want to encourage (so the simple approach would then 
|  have no advantages), but it isn't the job of a language 
|  standard to mandate good programming practices. (In the 
|  absence of such constraints, the definition of good tends 
|  to change with time.)

I am grateful for this reassurance that conversions are not expected,
though I hope the compiler writers know the rules -
as Michael Caine would have said, "Not a lot of people know that" :-) 

Thanks too for your accurate summing up of pros and cons.
My slight preference for the 'simple' approach 
perhaps came from always setting the compiler to warn about
conversions and carefully 'casting them all away'.
Alas Microsoft have not chosen this as their default
and it is not obvious how to ensure that
all files are compiled with the right warning option.
(After getting some helpful hints, I previously posted guidance
on how to modify common.js to achieve this).

But I am persuaded that the 'casting' method is OK.

Paul





Re: [boost] Re: Package for Cygwin distribution

2003-06-20 Thread Reed Hedges
Quoting David Abrahams [EMAIL PROTECTED]:

  Neal D. Becker Jun 19 2003 3:32PM
  If you are starting out, why not use current cygwin?
 
 
  Do you mean why not use GCC (3.2)?
  Because 3.2 is buggy and I can't use it with my software.  And it
  would take me quite some time to download it on my slow dialup
  line. When 3.3 is available, then I can use that.
 
 
 http://gcc.gnu.org:
 
 May 14, 2003 GCC 3.3 has been released. 
 
 However, in my experience GCC 3.3 is a lot worse than 3.2, so if
 you've got problems with 3.2 I'd think twice before upgrading


I meant, when 3.3 is available on Cygwin. 3.3 works great on my Linux system. It
seems to be working fine with Boost so far, at least for the parts that I use. 
What exactly did you have trouble with when you used 3.3?  

Has there been any extensive testing yet of 3.3 and Boost? 
 
Do you think that 3.3 will be a problem for me if I build a Cygwin package for
Boost?

Like I said, if I were to build a real cygwin package, I would use the most
recent compiler available for Cygwin.


reed


-- 
Reed Hedges
[EMAIL PROTECTED]
http://zerohour.net/~reed


Re: [boost] Licence Proliferation (background and explanation)

2003-06-20 Thread Beman Dawes
At 08:49 AM 6/15/2003, John Maddock wrote:

I've been working on an automated tool to extract and present a list of
boost licences in effect for a given boost library (or collection of
files).
Although the tool is working well, it's throwing up a lot of licences that
are used by just one or two files, and which are only very subtly different
from licences in use elsewhere in boost (by different I mean that they use
different words, not just formatting, or punctuation).  My guess is that
most of these are accidental changes, and if that is the case then it would
make things a whole lot easier if they could be changed to match other
existing boost licences - from a lawyer's point of view, why should a
commercial body wanting to use Boost have to review 50 almost but not quite
identical licences, when just two or three variants would do?

Thoughts, comments?

A single standard Boost licence is really the best solution for most 
libraries, IMO.

A draft should be ready in a few days.

--Beman



RE: [boost] Advanced match constants scheme - interval lower and upper values

2003-06-20 Thread Paul A Bristow
It may be helpful to those unfamiliar with the Boost Interval library
to see some exactly representable values of pi
(from test_pi_interval.cpp)

// Float 24 bit significand, 32 bit float 
// static const float pi_f_l = 13176794.0f/(1 << 22);
// static const float pi_f_u = 13176795.0f/(1 << 22);

// Exactly representable values calculated using NTL.
static const float pi_f_l = 3.141592502593994140625F;
static const float pi_f_u = 3.1415927410125732421875F;

cout << "pi_f_l  = " << pi_f_l << endl; // pi_f_l  = 3.1415925
cout << "pi_f_u  = " << pi_f_u << endl; // pi_f_u  = 3.14159274

// double 53-bit significand, 64 bit double  //
cout.precision(17); // significant digits10
// static const double pi_d_u = 3537118876014221.0f/(1 << 51);
// compiler chokes :-( - divide by zero! so need a cunning trick:
|  static const double pi_d_l = (3373259426.0 + 273688.0 / (1 << 21)) / (1 << 30);
// Or the NTL calculated exact representation:
static const double pi_d_l =
3.141592653589793115997963468544185161590576171875;
static const double pi_d_u =
3.141592653589793560087173318606801331043243408203125;
cout << "pi_d_l = " << pi_d_l << endl; // pi_d_l = 3.1415926535897931
cout << "pi_d_u = " << pi_d_u << endl; // pi_d_u = 3.141592653589794

// Long double 64 bit significand values, 80 bit ///
cout.precision(21); // significant digits10

// static const long double pi_l_l = 7244019458077122842.0f/(1 << 62);
// static const long double pi_l_u = 7244019458077122843.0f/(1 << 62);
// Compiler will choke! or an even more cunning trick will be needed.
static const long double pi_l_l =
3.14159265358979323829596852490908531763125210004425048828125L;
static const long double pi_l_u =
3.141592653589793238729649393903287091234233203524017333984375L;

cout << "pi_l_l = " << pi_l_l << endl; // 3.1415926535897931
cout << "pi_l_u = " << pi_l_u << endl; // 3.1415926535897931

and there are 128-bit values too, but I won't bore you further :-)

|  [*] It is not even true. Due to double rounding troubles, 
|  using a higher precision can lead to a value that is not the 
|  nearest number.

Is this true even when you have a few more digits than necessary?
Kahan's article suggested to me that adding two guard decimal digits
avoids this problem.  This is why 40 was chosen.

Consistency is also of practical importance - in practice, don't all
compilers read decimal digit strings the same way and end up with
the same internal representation (for the same floating point format),
and thus calculations will be as portable as is possible?  This is
what causes most trouble in practice - one gets a slightly different
result and wastes much time puzzling why.

|  So maybe the interface should provide four 
|  values for each constant at a given
|  precision: an approximation, the nearest value, a lower 
|  bound, and an upper bound.

Possible, but yet more complexity?

Paul

Re: [boost] Interest in a message buffer class for the boost library?

2003-06-20 Thread Paul Vanlint
So there seems to be some interest at least from a couple of people.

To further clarify what I have, here is a piece of code using my msg buffer
class to do a few different things and also, a snippet of the public part of
my class definition.

At one point there were functions which allowed data to be directly read
from and written to a TCP socket, however that relied on another custom
class which I did not write, so I have removed it.

It also uses a custom logging mechanism which I did write, but it is
probably best if I remove that and simply write the error messages to cerr.



#include <iostream>
#include <cstring>   // for strlen and memcpy

#include "msg_buffer.h"

using namespace std;

int
main()
{
  Msg_buffer tmp;

  char* sample_str1 = "01234567890123456789012345678901234567890123456789";
  char* sample_str2 = "Test1";
  char* sample_str3 = "Test2";
  char* sample_str4 = "Test3";
  int sample_int1 = 257;
  char sample_char1 = '!';

  // Writing data directly into the buffer structure,
  // e.g. how it may be done from a socket
  // In this case, we are simulating an incoming
  // stream giving 1 byte at a time
  size_t total_len = strlen(sample_str1)+1;
  char* str_ptr = sample_str1;
  size_t chunk_size = 1;
  char* buff_ptr;
  do
    {
      size_t len = tmp.get_write_chunk(buff_ptr, chunk_size);
      memcpy(buff_ptr, str_ptr, len);
      total_len -= len;
      str_ptr += len;
    } while (total_len > 0);

  // Add data into buffer using structured data functions
  tmp.put_msg_params("s", sample_str1);

  unsigned int reserved_id = tmp.reserve_space(sizeof(unsigned int));
  size_t written = tmp.put_msg_params("sdcss", sample_str2,
                                      sample_int1, sample_char1,
                                      sample_str3, sample_str4);

  // Back fill the reserved space
  tmp.fill_reserved(reserved_id, "u", written);

  // At this point, the data buffer should have the following data in order:
  char* str1;
  char* str1b;
  unsigned int uint1;
  char* str2;
  int int1;
  char char1;
  char* str3;
  char* str4;

  // Read data from buffer.
  // Note that for strings, we are simply getting a pointer to the
  // buffer instead of copying out, for performance reasons
  if (false == tmp.get_msg_params("ssusdcss", str1, str1b, uint1,
                                  str2, int1, char1, str3, str4))
    { cout << "Error in get_msg_params()\n"; }
  else
    {
      cout << "Retrieved:"
              "\n  str1 = \""    << str1
           << "\"\n  str1b = \"" << str1b
           << "\"\n  uint1 = "   << uint1
           << "\n  str2 = \""    << str2
           << "\"\n  int1 = "    << int1
           << "\n  char1 = '"    << char1
           << "'\n  str3 = \""   << str3
           << "\"\n  str4 = \""  << str4 << "\"\n";
    }
}



Will display:

Retrieved:
  str1 = "01234567890123456789012345678901234567890123456789"
  str1b = "01234567890123456789012345678901234567890123456789"
  uint1 = 23
  str2 = "Test1"
  int1 = 257
  char1 = '!'
  str3 = "Test2"
  str4 = "Test3"


class Msg_buffer
{
public:
  Msg_buffer();
  ~Msg_buffer();

  bool has_data();
  size_t get_total_data_len();
  size_t get_chunk_count();

  // This function resets pointers back to the beginning of the buffers
  // and does any housekeeping that is required.
  // After running reset_buffer(), the object may be treated as brand new.
  // It is suggested that the same buffer be reused to reduce overhead of
  // memory allocation.
  // Any previously allocated memory will be assumed to be freed at this
  // point even though we may actually keep it allocated for efficiency.
  bool reset_buffer();

// Read functions
  // Gets next char in buffer without incrementing read ptr
  bool peek(char& ch);

  // Read a formatted parameter list from the message buffer
  bool get_msg_params(char const* fmt, ...);

  // Read raw data from message buffer into buff_ptr
  bool get(unsigned int len, char const** buff_ptr);

// Write functions
  // This is used if we need to mark a point in the message that will be
  // filled in later, such as length fields which go at the front.
  // It returns a handle to the reserved space, not a pointer
  // A return handle of 0 means error
  unsigned int reserve_space(size_t len);

  // This allows us to fill a previously reserved space.
  // The reserved_id is the handle returned by reserve_space
  bool fill_reserved(unsigned int reserved_id, char const* fmt, ...);

  // This allows us to write a formatted parameter list into the message
  // buffer
  // Returns the number of bytes written, -1 means error
  ssize_t put_msg_params(char const* fmt, ...);

  ssize_t put(char ch);
  ssize_t put(char const* str_ptr);
  ssize_t put(ssize_t len, char const* data_ptr);
  ssize_t put(int value);
  ssize_t put(unsigned int value);
  ssize_t put(unsigned long value);

  // These would be used by TCP functions to access buffer chunks
  // directly. Note that successive calls to this function will return
  // successive chunks in the chain.
  // If request_len is zero, then it returns the rest of the buffer
  // For get_read_chunk, it only returns chunks and chunk pieces up to
  // the current write 

[boost] Re: Math Constants Formal Review - using namespaces to select float size is simpler?

2003-06-20 Thread Gennaro Prota
On Thu, 19 Jun 2003 19:51:31 +0100, Paul A Bristow
[EMAIL PROTECTED] wrote:

|  Well, you wanted to know what is likely to be accepted. In a 
|  formal review (this isn't anymore, AFAIU, is it?) I would 
|  vote no to your approach.

But would you vote yes if the only presentation was Daniel's method?

Overall? What I like in his code is the use of template specialization
to provide the values for different types, and the possibility to add
UDTs. OTOH, I'm not sure whether the conversion function template and
the pre-built binary operators (see ADD_OPERATOR) are worth having.
Personally I probably won't use those operators, as they are just
syntactic sugar to avoid an explicit cast of the constant instance to
the type of its (left or right) neighbour. Of course that's
different from what we are used to with built-in constant variables
(floating point promotion). Example: if I have

 const double pi = ...;
 
then in

 pi * f * d

the language assures me that f is promoted to a double as well and the
result is a double. When I use class pi instead the selected type of
the constant (through pi_value) becomes the one of the adjacent
operand, which is quite a different thing. So you have a familiar
appearance (you write the expression as if 'pi' had a built-in type,
with the usual operators etc.) with an unfamiliar behavior. I'm not
saying that is necessarily wrong but, not having any concrete
experience with it, I'm not sure it is a good thing 
either. However, one can explicitly static_cast, so those who want to
be safe can be.

Of course, if you leave the conversion function there but not the
operators you end up with the same behavior in most cases, thanks to
the built-in candidates in overload resolution. And a conversion
function can't be made 'explicit' (for now?). A possibility could be
to use some floating point promotion traits (one trivial
implementation is in the Yahoo files section), to select the
*promoted* neighbour type, but that's probably not worth the trouble.

Summarizing: I think Daniel is on the right track, but I'm not
particularly fond of the automatic conversions (actually they are
automatic selections of the type, based on the context of usage -
and since the selection mechanism is different from the built-in
one...)


I am only really concerned that the accurate constants become more
standardised, in name and value.

Yes. Just to stimulate discussion, and without any offence towards
Daniel's solution, this is an approach without the conversion function
and the operator overloads. Beware that it's completely untested.


namespace math
{
    // Generic base class for all constants
    template< typename T /*, template<class> class F*/ >
    struct constant {};

    // value_selector basically says what's the
    // template class (pi_value, gamma_value, etc.) whose
    // specializations give the different values. It avoids
    // using template template parameters, which aren't
    // supported by all compilers.

    template<typename T>
    struct value_selector;

    template< typename U, typename T >
    U as(const constant<T>& c) {
        typename value_selector<T>::template type<U>::spec obj;
        return obj();
    }


    // Here's the definition for pi for some
    // usual types (can be extended by UDTs)
    //
    template<typename T> struct pi_value;

    template<> struct pi_value<float>
    { float operator()() { return 3.14f; } };

    template<> struct pi_value<double>
    { double operator()() { return 3.1415; } };

    template<> struct pi_value<long double>
    { long double operator()() { return 3.141592L; } };


    // Here's the single line to create a useful interface
    struct pi_type : constant< pi_type > {} pi;

    // Here's the line to tell that the value of type T
    // for the constant pi is given by pi_value<T>
    template<> struct value_selector<pi_type>
    {
        template< typename U >
        struct type {
            typedef pi_value<U> spec;
        };
    };
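
    // Usage illustration:  double x = as<double>(pi);
    // (value_selector<pi_type> routes the call to pi_value<double>.)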

    //-
    // another constant (pi ** 2)
    template< typename T > struct pi2_value;
    template<> struct pi2_value<float>
    { float operator()() const { return 9.87f; } };
    template<> struct pi2_value<double>
    { double operator()() const { return 9.8696; } };
    template<> struct pi2_value<long double>
    { long double operator()() const { return 9.8696044L; } };

    struct pi2_type : constant<pi2_type> {} pi2;

    template<> struct value_selector<pi2_type>
    {
        template< typename U >
        struct type {
            typedef pi2_value<U> spec;
        };
    };

    //---
    // Some obvious (?) constants:
    //
    #define CONSTANT_VALUE( name, value ) \
    template< typename T > struct name##_value \
    { T operator()() const { return value; } }; \
    \
    struct name##_type : constant<name##_type> {} name; \
  

Re: [boost] [Graph] Improved creation of visitors from function objects

2003-06-20 Thread Vladimir Prus
Douglas Gregor wrote:
 Creating new visitors in the BGL can be a pain, because it may require a
 lot of extra typing for simple cases. I'd like to add the ability to attach
 function objects to visitor events like this:

  dfs_visitor()
.do_on_back_edge(var(has_cycle) = true)
.do_on_tree_edge(bind(&vector<edge>::push_back, ref(tree_edges), _1));

 I'd really prefer on_XXX instead of do_on_XXX, but GCC trips over the
 former syntax. Anyway, the code is ready to check in if there are no
 objections. The patch isn't very large, but is bigger than I would like to
 post here.

Hi Doug,
did you commit that patch already? I can't find anything named do_on in 
current CVS.

- Volodya




[boost] tokenizer comments

2003-06-20 Thread Vladimir Prus

I have a few comments regarding the tokenizer library.

1. The documentation says that char_delimiters_separator is the default parameter 
to the 'tokenizer' template, and at the same time says that 
'char_delimiters_separator' is deprecated. I think that's confusing and the 
default parameter should be changed to 'char_separator'.

2. The token iterator description is very brief. Specifically, it does not 
say what that iterator is useful for, or when it's preferable to direct use 
of tokenizer. The only way to construct the iterator is via the 
make_token_iterator function, which takes two iterators as arguments. The 
meaning of those arguments is not documented.

Lastly, the usage example 

   typedef token_iterator_generator<offset_separator>::type Iter;
   Iter beg = make_token_iterator<string>(s.begin(),s.end(),f);
   Iter end = make_token_iterator<string>(s.end(),s.end(),f);   
   for(;beg!=end;++beg){

appears to be just longer than tokenizer use:

   typedef tokenizer< offset_separator > tok_t;
   tok_t tok(s, f);
   for(tok_t::iterator i = tok.begin(); i != tok.end(); ++i) {

so I *really* wonder what this iterator is for. OTOH, if it could be used 
like:

   for(token_iterator< offset_separator > i(s, f), e; i != e; ++i) {
   }

it would be definitely simpler and easier. Is something like this possible?

3. The 'escaped_list_separator' template could have a default argument for the 
first parameter, Char.

4. I almost always try to use tokenizer when values are separated by commas. 
Believe it or not, I'm always confused as to which tokenizer function to use.
This time, I read all the docs for char_separator and only then used 
escaped_list_separator -- which does the work out of the box. Maybe a different 
name, like csv_with_escapes_separator or extended_csv_separator, would help?
It would make it immediately clear what this separator is for.
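
For readers unfamiliar with it, here is a minimal usage sketch (not from the
docs) of the out-of-the-box behaviour referred to above; the input string is
made up for illustration:

#include <boost/tokenizer.hpp>
#include <iostream>
#include <string>

int main()
{
    std::string s = "one,\"two, still two\",three";
    typedef boost::tokenizer<boost::escaped_list_separator<char> > tok_t;
    tok_t tok(s);   // default separator: comma, double quote, backslash escape
    for (tok_t::iterator i = tok.begin(); i != tok.end(); ++i)
        std::cout << *i << '\n';   // prints: one / two, still two / three
}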

- Volodya



RE: [boost] Advanced match constants scheme

2003-06-20 Thread Guillaume Melquiond
On Fri, 20 Jun 2003, Paul A Bristow wrote:

[snip]

 |  [*] It is not even true. Due to double rounding troubles,
 |  using a higher precision can lead to a value that is not the
 |  nearest number.

 Is this true even when you have a few more digits than necessary?
 Kahan's article suggested to me that adding two guard decimal digits
 avoids this problem.  This why 40 was chosen.

I don't know if we are speaking about the same thing. I am not sure I will
be clear if I try to explain it directly, so I will just give an example.
The numbers are supposed to be stored in decimal in order to clarify the
reasoning (it also works with binary constants, just replace the non-zero
digits by 1s).

  the constant is 1.00050001236454786005785305678
  you write it with seven digits: 1.000500
  the floating-point format only uses 4 digits
  so the compiler rounds the number to: 1.000 (round-to-nearest-even)
  but the nearest value was: 1.001 (since the constant is > 1.0005)

I hope it was clear enough. If you use more digits than necessary, the
value you finally obtain may not be the nearest one. And it doesn't have
anything to do with the compiler, with the number of digits used, with the
radix, etc.

With the most common constants and formats, I don't think the problem
arises. But it is still possible: a string of zeros at the wrong place is
enough (and when the only digits are 0 and 1, it is not that uncommon to
get such a string).

 Consistency is also of practical importance - in practice, don't all
 compilers read decimal digit strings the same way and will end up with
 the same internal representation (for the same floating point format),
 and thus calculations will be as portable as is possible?  This is
 what causes most trouble in practice - one gets a slightly different
 result and wastes much time puzzling why.

This problem doesn't depend on the compiler. If all the compilers read
digit strings the same way and apply the same kind of rounding, they will
all fail the same way. It is an arithmetic problem, not a compilation
problem.

 |  So maybe the interface should provide four
 |  values for each constant at a given
 |  precision: an approximation, the nearest value, a lower
 |  bound, and an upper bound.

 Possible, but yet more complexity?

Yes, but also more correct. Most people will rely on the approximation
(it's the 40 digits values you are providing) to do their computations.
But there may be people who expect to have the nearest value. They will
get it if the library is able to provide it, and will get a compilation
error otherwise. There is absolutely no surprise this way.

Guillaume



Re: [boost] tokenizer comments

2003-06-20 Thread Pavol Droba
Hi,

I have no comment about the tokenizer library, but if you are interested
in stuff like that, you can have a look into the sandbox.

The string_algo library already contains this functionality 
( along with other interesting features ) and it is implemented in a more generic way. 

Documentation is not updated yet ( please be patient ), but you can have
a look at the tests for examples of how the framework works.

Check $sandbox$/boost/string_algo/split.hpp ( and related headers )
for algorithms 
and $sandbox$/boost/string_algo/classification.hpp for supporting functors.

$sandbox$/libs/string_algo/test/iterator_test.hpp contains some usage examples.

Regards,
Pavol.

PS: 

Just a small comment. The library ( and especially the split/tokenize part )
is dependent on Boost.iterator_adaptors. Unfortunately it is not yet ported
to the new version. The conclusion is that you cannot build the tests directly
from the sandbox. 
If you are looking to use the lib, copy all the headers from the string_algo subdir
( and/or the cumulative headers string_algo.hpp and string_algo_regex.hpp )
into your boost tree. Everything should work fine then.




[boost] Re: tokenizer comments

2003-06-20 Thread Alisdair Meredith
Vladimir Prus wrote:

 1. The documentation says that char_delimiters_separator is default parameter
 to 'tokenizer' template, and at the same time says that
 'char_delimiters_separator' is deprecated. I think that's confusing and
 default parameter should be changed to 'char-separator'.

I was about to make a similar comment myself.  Been using tokenizer
quite a bit this week myself <g>

Another comment on the docs is that around half the examples do not
indicate the expected result, which makes it hard for me when looking
between the examples to quickly find the variation I am after.  The
examples are well chosen to use a similar data set, but trying to work
out which iterators will skip whitespace, which only return values,
feels very 'try it and see'.

I feel a little guilty lending criticisms though, as the library worked
straight out of the box and is that wonderful combination of simple to use
and powerful.  Nice job <g>

-- 
AlisdairM



[boost] Re: Advanced match constants scheme

2003-06-20 Thread Gennaro Prota
On Fri, 20 Jun 2003 18:30:48 +0200 (CEST), Guillaume Melquiond
[EMAIL PROTECTED] wrote:

On Fri, 20 Jun 2003, Paul A Bristow wrote:

[snip]

 |  [*] It is not even true. Due to double rounding troubles,
 |  using a higher precision can lead to a value that is not the
 |  nearest number.

 Is this true even when you have a few more digits than necessary?
 Kahan's article suggested to me that adding two guard decimal digits
 avoids this problem.  This why 40 was chosen.

I don't know if we are speaking about the same thing.

I don't know either. What I know is the way floating literals should
work:


  A floating literal consists of an integer part, a decimal point,
  a fraction part, an e or E, an optionally signed integer exponent,
  and an optional type suffix. [...]
  If the scaled value is in the range of representable values for its
  type, the result is the scaled value if representable, else the
  larger or smaller representable value nearest the scaled value,
  chosen in an implementation-defined manner.


Of course "the nearest" means nearest to what you've actually written.
Also, AFAICS, there's no requirement that any representable value can
be written as a (decimal) string literal. And, theoretically, the
"chosen in an implementation-defined manner" above could simply mean
"randomly" as long as the fact is documented.

Now, I don't even get you when you say "more digits than necessary".
One thing is the number of digits you provide in the literal, one
other thing is what can effectively be stored in an object. I think
you all know that something as simple as

 float x = 1.2f;
 assert(x == 1.2);

fails on most machines.


Genny.



[boost] Formal Review Request: Numeric Conversions

2003-06-20 Thread Fernando Cacciola
Hi All!

I hereby request a formal review of the Numeric Conversions library,
which can be found here:

http://groups.yahoo.com/group/boost/files/numeric_conversions.zip

Here's an excerpt of the 'Overview' documentation section:

--
The Boost Numeric Conversion library is a collection of tools
to describe and perform conversions between values of different
numeric types.

The library includes a special alternative for a subset of
std::numeric_limits, the bounds traits class, which provides
a consistent way to obtain the boundary values for the range
of a numeric type.

It also includes a traits class conversion_traits which describes
the compile-time properties of a conversion from a source
to a target numeric type.
Both arithmetic and user-defined numeric types can be used.

A policy-based converter object which uses conversion_traits
to select an optimized implementation is supplied.
Such implementation uses an optimal range checking code suitable
for the source/target combination.
The converter's out-of-range behavior can be customized via
an OverflowHandler policy.
For floating-point to integral conversions, the rounding mode can
be selected via the Float2IntRounder policy.
The underlying raw conversion can be passed via a RawConverter policy.
The optimized automatic range-checking logic can be overridden
via a UserRangeChecker policy.
-
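
As a rough usage sketch of what the overview describes (the names below are
those of the Numeric Conversion library as eventually released; the version
under review here may differ in details):

#include <boost/numeric/conversion/converter.hpp>
#include <boost/numeric/conversion/bounds.hpp>
#include <iostream>

int main()
{
    // bounds<> : consistent access to the boundary values of a numeric type
    std::cout << boost::numeric::bounds<float>::lowest() << '\n';

    // converter<Target, Source> : range-checked conversion; the default
    // OverflowHandler throws on out-of-range values, and the default
    // Float2IntRounder truncates toward zero
    typedef boost::numeric::converter<int, double> Double2Int;
    int i = Double2Int::convert(3.14);   // i == 3
    std::cout << i << '\n';
}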

Fernando Cacciola







Re: [boost] Re: Advanced match constants scheme

2003-06-20 Thread Guillaume Melquiond
On Fri, 20 Jun 2003, Gennaro Prota wrote:

  |  [*] It is not even true. Due to double rounding troubles,
  |  using a higher precision can lead to a value that is not the
  |  nearest number.
 
  Is this true even when you have a few more digits than necessary?
  Kahan's article suggested to me that adding two guard decimal digits
  avoids this problem.  This why 40 was chosen.
 
 I don't know if we are speaking about the same thing.

 I don't know either. What I know is the way floating literals should
 work:

   A floating literal consists of an integer part, a decimal point,
   a fraction part, an e or E, an optionally signed integer exponent,
   and an optional type suffix. [...]
   If the scaled value is in the range of representable values for its
   type, the result is the scaled value if representable, else the
   larger or smaller representable value nearest the scaled value,
   chosen in an implementation-defined manner.

I know this part of the standard. But it doesn't apply in the situation I
was describing. I was describing the case of a constant whose decimal (and
consequently binary) representation is not finite. It can be an irrational
number like pi; but it can also simply be a rational like 1/3.

When manipulating such a number, you can only give a finite number of
decimal digits. And so the phenomenon of double rounding I was describing
will occur, since you first do a rounding to get a finite number of
digits and then the compiler does another rounding (which is described by
the part of the standard you are quoting) to fit the constant into
the floating-point format.

Just look one more time at the example I was giving in the previous mail.

 Of course the nearest means nearest to what you've actually written.
 Also, AFAICS, there's no requirement that any representable value can
 be written as a (decimal) string literal. And, theoretically, the

I never saw any computer with unrepresentable values. It would require that it
manipulate numbers in a radix different from 2^p*5^q (many computers use
radix 2, and some of them use radix 10 or 16; no computer imo uses radix
3).

 chosen in an implementation-defined manner above could simply mean
 randomly as long as the fact is documented.

The fact that it does it randomly is another problem. Even if it was not
random but perfectly known (for example round-to-nearest-even like in the
IEEE-754 standard), it wouldn't change anything: it would still be a
second rounding. As I said, it is more of an arithmetic problem than of a
compilation problem.

 Now, I don't even get you when you say more digits than necessary.

What I wanted to say is that writing too many decimal digits of a number
doesn't improve the precision of the constant. It can degrade it due to
the double rounding. In conclusion, when you have a constant, it is better
to give an exact representation of the nearest floating-point number
rather than writing it with 40 decimal digits. By doing that, the compiler
cannot do the second rounding: there is only one rounding (the one you
did) and you are safe.

 One thing is the number of digits you provide in the literal, one
 other thing is what can effectively be stored in an object. I think
 you all know that something as simple as

  float x = 1.2f;
  assert(x == 1.2);

 fails on most machines.

Yes, but it's not what I was talking about. I hope it's a bit more clear
now.

Guillaume



[boost] Re: Comments on the bcp tool

2003-06-20 Thread Vladimir Prus
John Maddock wrote:


 Another note is on usability. Say I create directory po and find that
 some
 files are missing. I tweak bcp source and try again. But attempt to
 override
 files fail. I remove po directory. But then bcp says the destination
 does
 not exist. It's a bit inconvenient --- maybe destination directory should
 be
 created if it does not exist. Or maybe, there should be --overwrite
 switch,
 which would simply clean up destination before doing copies.
 
 Or maybe it should just go ahead and overwrite, should be easy to fix.

Except for one problem. If the second run of bcp selects fewer files than
the first, and you only overwrite files, not clean up the entire directory,
the number of files will not be reduced. Unnecessary ones will just stay in 
the directory.

 And the last note:

bcp --boost=/home/ghost/Work/boost boost/function/function1.hpp
 function1

 creates a tree of 1975 kbytes. Hmm, never thought there's that many
 dependencies...
 
 Hmm, it seems to pull in more of type_traits than I would have expected,
 and that pulls in part of mpl, and then a whole load of the preprocessor
 library.  I don't think any given compiler will include half of that, but
 which half depends upon which compiler...

:-(
I start to understand why Gennadiy Rozental was saying that the dependency from
program_options to function is a bit too much --- I don't feel all that good
about adding 2 MB just for command line parsing. Of course, this only
matters when packaging the library separately.

- Volodya




[boost] Re: tokenizer comments

2003-06-20 Thread Vladimir Prus
Pavol Droba wrote:

 Hi,
 
 I have no comment about the tokenize library, but if your are interested
 in the stuff like that, you can have a look into the sandbox.
 
 string_algo library already contains this functionality
 ( along with other interesting features ) and it is implemented in more
 generic way.

Hi Pavol,

I'm already aware of string_algo and am using it a bit. I wasn't aware it has
a tokenizer component, though.

I'll take a look.

Thanks,
Volodya



Re: [boost] [BGL] Patch for nonrecursive DFS to fix stack overflow

2003-06-20 Thread Vladimir Prus

Bruce Barr wrote:
 Here's a patch to depth_first_search.hpp in BGL in version 1.30.0 of boost
 that implements nonrecursive depth first search.  This reduces or
 eliminates the problem of stack overflow that occurs with DFS in large
 graphs.  There also may be a performance gain in some cases.  If anyone has
 a test suite for BGL I'd love to hear the results.  Otherwise, it works
 exactly the same. The event points
 are all the same.

Just like Doug,
I think this patch is desirable. But I also think that a little bit of 
additional work is required from Bruce ;-)

At least for me, the algorithm is not 100% clear. When I first saw the

  while(ei != ee) {
  }

loop, I thought "huh, iteration over all out-edges?", and when I saw:

   vis.finish_vertex(u, g);

I thought, "huh, we've only pushed adjacent vertices onto the stack, why do we call 
finish_vertex?" Both problems arise from the fact that ei and u are modified 
when an adjacent vertex is white. I think that, to avoid confusion in the future, 
a comment describing what's going on is in order.
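
To make the technique easier to picture, here is a rough free-standing sketch
(not the actual patch) of a depth-first search with an explicit stack: each
stack entry holds a vertex together with its position in that vertex's
out-edge list, so the loop can resume a parent exactly where it left off, and
finish_vertex fires once that saved position is exhausted.

#include <cstddef>
#include <iostream>
#include <stack>
#include <utility>
#include <vector>

typedef std::vector< std::vector<std::size_t> > Graph;  // adjacency lists

void dfs(const Graph& g, std::size_t start)
{
    std::vector<bool> visited(g.size(), false);
    typedef std::pair<std::size_t, std::size_t> Frame;  // (vertex, next edge index)
    std::stack<Frame> stk;

    visited[start] = true;                     // discover_vertex(start)
    stk.push(Frame(start, 0));

    while (!stk.empty())
    {
        std::size_t u  = stk.top().first;
        std::size_t ei = stk.top().second;
        if (ei == g[u].size())                 // all out-edges examined
        {
            stk.pop();
            std::cout << "finish " << u << '\n';   // finish_vertex(u)
            continue;
        }
        ++stk.top().second;                    // advance the saved "iterator"
        std::size_t v = g[u][ei];
        if (!visited[v])                       // tree edge to a white vertex
        {
            visited[v] = true;                 // discover_vertex(v)
            stk.push(Frame(v, 0));             // descend; u resumes later
        }
    }
}

int main()
{
    Graph g(4);
    g[0].push_back(1); g[0].push_back(2);
    g[1].push_back(3);
    dfs(g, 0);   // finish order: 3 1 2 0
}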

- Volodya
