Hi, I had a look at this; it gives amazing performance. (I
haven't tried it myself yet :-p) In any case, the new syntax looks
far smarter than preparing arbitrary metacommands.

michael> I have moved this patch to the next CF.

> > Hello Heikki,
> >
> >>> As soon as we add more functions, the way they are documented needs to be
> >>> reworked too; we'll need to add a table in the manual to list them.
> >>
> >>
> >> Here is a v8 with "abs", "min", "max", "random", "gaussian" et
> >> "exponential".
> >>
> >> [...]
> >>
> >> There is no real doc, WIP...

From a quick glance at the patch, I have two simple comments.
Would you consider them?

1. evalInt and evalDouble call each other recursively on a single
 expr node. This looks simple and seems to work, but it could
 easily be broken into infinite mutual recursion, which would
 quickly exceed the stack depth.

 I think it is better not to do so. For example, merging
 evalDouble and evalInt into a single evaluateExpr() that can
 return either an int or a double result would avoid this. The
 interface could be something like one of the following; some
 individual functions might be better split out.

  static ExprRetType evaluateExpr(TState *, CState *, PgBenchExpr *,
                                  int *iret, double *dret);

  or

  typedef struct EvalRetVal {
    bool is_double;
    union {
      int    ival;
      double dval;
    } u;
  } EvalRetVal;
  static bool evaluateExpr(..., EvalRetVal *ret);
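
 As an illustration only, here is a minimal, self-contained sketch
 of such a single-entry evaluator. The node kinds (ENODE_INTEGER,
 ENODE_DOUBLE, ENODE_ADD) and field names are made up for the
 example, not the ones in the patch:

  #include <stdbool.h>

  /* hypothetical expression node; field names are not from the patch */
  typedef enum { ENODE_INTEGER, ENODE_DOUBLE, ENODE_ADD } ExprKind;

  typedef struct ExprSketch
  {
      ExprKind kind;
      int      ival;                        /* ENODE_INTEGER */
      double   dval;                        /* ENODE_DOUBLE */
      struct ExprSketch *left, *right;      /* ENODE_ADD */
  } ExprSketch;

  typedef struct EvalRetVal
  {
      bool is_double;
      union { int ival; double dval; } u;
  } EvalRetVal;

  /*
   * Single recursive entry point: every child is evaluated through
   * this one function, so int and double evaluation cannot bounce
   * back and forth between two mutually recursive functions.
   */
  static bool
  evaluateExpr(const ExprSketch *expr, EvalRetVal *ret)
  {
      switch (expr->kind)
      {
          case ENODE_INTEGER:
              ret->is_double = false;
              ret->u.ival = expr->ival;
              return true;
          case ENODE_DOUBLE:
              ret->is_double = true;
              ret->u.dval = expr->dval;
              return true;
          case ENODE_ADD:
          {
              EvalRetVal l, r;

              if (!evaluateExpr(expr->left, &l) ||
                  !evaluateExpr(expr->right, &r))
                  return false;
              if (l.is_double || r.is_double)
              {
                  /* promote to double if either operand is double */
                  ret->is_double = true;
                  ret->u.dval = (l.is_double ? l.u.dval : l.u.ival) +
                                (r.is_double ? r.u.dval : r.u.ival);
              }
              else
              {
                  ret->is_double = false;
                  ret->u.ival = l.u.ival + r.u.ival;
              }
              return true;
          }
      }
      return false;
  }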


2. You combined abs/sqrt/ddebug into one case and then branch among
  them inside it. It would be better to split them into separate
  cases, even if the eval*() bodies look duplicated (see the sketch
  below).
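
 For illustration, a sketch of the split shape, with one case per
 function; the FuncKind names here are hypothetical, not the ones
 from the patch:

  #include <math.h>
  #include <stdio.h>

  typedef enum { FUNC_ABS, FUNC_SQRT, FUNC_DDEBUG } FuncKind;

  /* one case per function, even if some bodies look alike */
  static double
  evalDoubleFunc(FuncKind func, double arg)
  {
      switch (func)
      {
          case FUNC_ABS:
              return fabs(arg);
          case FUNC_SQRT:
              return sqrt(arg);
          case FUNC_DDEBUG:
              fprintf(stderr, "debug: %g\n", arg);
              return arg;
      }
      return 0.0;               /* not reached */
  }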


regards,


> > Here is a v9 with a doc. The implicit typing of expressions is improved.
> >
> > I also added two debug functions which allow to show intermediate integer
> > (idebug) or double (ddebug).
> >
> >   \set i idebug(random(1, 10))
> >
> > will print the random value and assign it to i.
> >
> >
> > I updated the default scripts, which seems to speed up meta command
> > evaluations. The initial version does less than 2 million evals per second:
> >
> >   sh> cat before.sql
> >   \set naccounts 100000 * :scale
> >   \setrandom aid 1 :naccounts
> >
> >   sh> ./pgbench -T 3 -P 1 -f before.sql
> >   [...]
> >   tps = 1899004.793098 (excluding connections establishing)
> >
> >
> > The updated version does more than 3 million evals per second:
> >
> >   sh> cat after.sql
> >   \set aid random(1, 100000 * :scale)
> >
> >   sh> ./pgbench -T 3 -P 1 -f after.sql
> >   [...]
> >   tps = 3143168.813690 (excluding connections establishing)
> >
> >
> > Suggestion:
> >
> > A possibility would be to remove altogether the \setrandom stuff as the
> > functionality is available through \set, maybe with an error message to
> > advise about using \set with one of the random functions. That would remove
> > about 200 ugly locs...
> >
> > Another option would be to translate the \setrandom stuff into a \set
> > expression, which would maybe save 100 locs in the eval, but would keep
> > and slightly expand the "manual" parser part.
> 
> I have moved this patch to the next CF.
> -- 
> Michael

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


