On Wed, Jun 08, 2005 at 10:02:34AM +1000, Grant Parnell - slug wrote:
> Essentially we agree on only looking at times of the day where packing
> activity is occurring. The jury is out on whether you can assume the
> orders were processed sequentially based on docketdate.
Because SQL is not a Turing-complete language (i.e. you can be SURE
that a pure-SQL query will always complete, possibly after a long time
has elapsed), it turns out that it is much easier to detect the gaps
between activity than to detect the overlap regions during activity.
For example:
junk=> select * from orders;
ordno | pickslip_printed | docketdate
-------+---------------------+---------------------
1 | 2005-06-01 14:32:16 | 2005-06-01 14:34:47
2 | 2005-06-01 15:12:27 | 2005-06-01 15:27:26
3 | 2005-06-01 15:12:28 | 2005-06-01 15:30:25
4 | 2005-06-01 15:12:29 | 2005-06-01 15:21:53
5 | 2005-06-01 15:41:29 | 2005-06-01 16:12:07
6 | 2005-06-01 15:41:32 | 2005-06-01 16:11:45
7 | 2005-06-01 15:41:33 | 2005-06-01 15:52:17
8 | 2005-06-01 15:41:33 | 2005-06-01 15:49:15
9 | 2005-06-01 15:41:34 | 2005-06-01 15:48:30
10 | 2005-06-01 15:41:34 | 2005-06-01 15:45:56
11 | 2005-06-01 15:53:00 | 2005-06-01 15:57:57
12 | 2005-06-01 15:53:00 | 2005-06-01 16:01:33
13 | 2005-06-01 15:53:00 | 2005-06-01 16:00:24
14 | 2005-06-01 16:02:25 | 2005-06-01 16:04:00
15 | 2005-06-01 16:02:26 | 2005-06-01 16:08:09
16 | 2005-06-01 16:22:40 | 2005-06-01 16:34:49
17 | 2005-06-01 16:22:41 | 2005-06-01 16:36:26
18 | 2005-06-01 16:22:42 | 2005-06-01 16:37:52
19 | 2005-06-01 16:22:42 | 2005-06-01 16:25:59
(19 rows)
junk=> select o1.ordno, o2.ordno, o2.pickslip_printed - o1.docketdate as gap
from orders as o1, orders as o2
where o1.docketdate < o2.pickslip_printed
and not exists
(
select * from orders as o3
where ( o3.docketdate > o1.docketdate and o3.docketdate < o2.pickslip_printed )
or ( o3.pickslip_printed > o1.docketdate and o3.pickslip_printed < o2.pickslip_printed )
or ( o3.pickslip_printed = o2.pickslip_printed and o3.ordno < o2.ordno )
);
ordno | ordno | gap
-------+-------+----------
1 | 2 | 00:37:40
3 | 5 | 00:11:04
5 | 16 | 00:10:33
7 | 11 | 00:00:43
12 | 14 | 00:00:52
(5 rows)
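If you want to sanity-check the query logic outside PostgreSQL, here's a
sketch that replays it in SQLite from Python. The timestamps are kept as
ISO-8601 text, which compares correctly as plain strings, and only the
order-number pairs are selected since SQLite has no interval type:

```python
import sqlite3

# The 19 sample orders from the session above.
rows = [
    (1,  "2005-06-01 14:32:16", "2005-06-01 14:34:47"),
    (2,  "2005-06-01 15:12:27", "2005-06-01 15:27:26"),
    (3,  "2005-06-01 15:12:28", "2005-06-01 15:30:25"),
    (4,  "2005-06-01 15:12:29", "2005-06-01 15:21:53"),
    (5,  "2005-06-01 15:41:29", "2005-06-01 16:12:07"),
    (6,  "2005-06-01 15:41:32", "2005-06-01 16:11:45"),
    (7,  "2005-06-01 15:41:33", "2005-06-01 15:52:17"),
    (8,  "2005-06-01 15:41:33", "2005-06-01 15:49:15"),
    (9,  "2005-06-01 15:41:34", "2005-06-01 15:48:30"),
    (10, "2005-06-01 15:41:34", "2005-06-01 15:45:56"),
    (11, "2005-06-01 15:53:00", "2005-06-01 15:57:57"),
    (12, "2005-06-01 15:53:00", "2005-06-01 16:01:33"),
    (13, "2005-06-01 15:53:00", "2005-06-01 16:00:24"),
    (14, "2005-06-01 16:02:25", "2005-06-01 16:04:00"),
    (15, "2005-06-01 16:02:26", "2005-06-01 16:08:09"),
    (16, "2005-06-01 16:22:40", "2005-06-01 16:34:49"),
    (17, "2005-06-01 16:22:41", "2005-06-01 16:36:26"),
    (18, "2005-06-01 16:22:42", "2005-06-01 16:37:52"),
    (19, "2005-06-01 16:22:42", "2005-06-01 16:25:59"),
]

con = sqlite3.connect(":memory:")
con.execute("create table orders "
            "(ordno integer, pickslip_printed text, docketdate text)")
con.executemany("insert into orders values (?, ?, ?)", rows)

pairs = con.execute("""
    select o1.ordno, o2.ordno
    from orders as o1, orders as o2
    where o1.docketdate < o2.pickslip_printed
      and not exists (
        select * from orders as o3
        where (o3.docketdate > o1.docketdate
               and o3.docketdate < o2.pickslip_printed)
           or (o3.pickslip_printed > o1.docketdate
               and o3.pickslip_printed < o2.pickslip_printed)
           or (o3.pickslip_printed = o2.pickslip_printed
               and o3.ordno < o2.ordno)
      )
    order by o1.ordno""").fetchall()

print(pairs)  # [(1, 2), (3, 5), (5, 16), (7, 11), (12, 14)]
```

It turns up the same five pairs as the PostgreSQL session.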
junk=> select max(docketdate) - min(pickslip_printed) as whole_day, count(*) as
work_done from orders;
whole_day | work_done
-----------+-----------
02:05:36 | 19
(1 row)
junk=> select '02:05:36'::interval - '01:00:52'::interval as neat_time;
neat_time
-----------
01:04:44
(1 row)
The 01:00:52 is simply the sum of the five gaps found above. On the
basis that the total time less the idle time must be the time spent
working, you can say that over the day you got a usable output of 19
orders processed at an input cost of roughly 1 hour and 5 minutes spent
working. That's an average of 3 minutes and 24 seconds per order.
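Redoing that arithmetic procedurally, as a quick sketch with the figures
above:

```python
from datetime import timedelta

# The five gaps reported by the query above.
gaps = [timedelta(minutes=37, seconds=40),   # 1 -> 2
        timedelta(minutes=11, seconds=4),    # 3 -> 5
        timedelta(minutes=10, seconds=33),   # 5 -> 16
        timedelta(seconds=43),               # 7 -> 11
        timedelta(seconds=52)]               # 12 -> 14

whole_day = timedelta(hours=2, minutes=5, seconds=36)
idle = sum(gaps, timedelta())       # 1:00:52
working = whole_day - idle          # 1:04:44
print(idle, working, working / 19)  # last figure is roughly 3 min 24 sec
```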
It's a backwards way of doing things but at least it can all be done in
one single query (give or take a translation from my PostgreSQL code into
your MySQL database). Note that the query that finds the gaps still joins
the table onto itself and then onto itself again, so in the simple case its
processing time will be of cubic order. However, you can safely work with
one day at a time, so if you build a view (or temporary table in MySQL) that
has just one day's orders then the cubic time probably won't hurt you too
badly because the entire temp table will fit into memory cache. I'm sure that
some clever index structures would speed it up.
Of course, if SQL supported a "running total" aggregate function then it
could all be processed as a sequential event list, but then the "order by"
clause would have an effect on the data in the output lines and I think
there is a rule that makes such effects illegal. It just goes to show that
while SQL is a pretty handy database language, it does have its limitations.
Which is another way of saying that procedural languages will never die,
because ultimately some problems are only solved easily by using a
procedural language.
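For what it's worth, that sequential event list is only a few lines in a
procedural language: sort every pickslip and docket event by time, and a
gap is simply a docket event immediately followed by a pickslip event. A
sketch in Python (my own illustration, not the original query; the tuple
sort happens to break ties between equal pickslip times by lowest ordno,
matching the third clause of the NOT EXISTS above):

```python
def find_gaps(orders):
    """orders: (ordno, pickslip_printed, docketdate) tuples whose
    timestamps are ISO-8601 strings, which sort correctly as text."""
    events = []
    for ordno, printed, docketed in orders:
        events.append((printed, "pickslip", ordno))
        events.append((docketed, "docket", ordno))
    events.sort()  # one O(n log n) pass instead of a cubic triple join
    return [(o1, o2, t1, t2)
            for (t1, k1, o1), (t2, k2, o2) in zip(events, events[1:])
            if k1 == "docket" and k2 == "pickslip"]

# A three-order toy table: only order 1 -> 2 has a genuine gap,
# because order 3's pickslip prints while order 2 is still open.
demo = [(1, "09:00:00", "09:10:00"),
        (2, "09:20:00", "09:30:00"),
        (3, "09:25:00", "09:40:00")]
print(find_gaps(demo))  # [(1, 2, '09:10:00', '09:20:00')]
```

Fed the 19-order table above, it turns up the same five gaps as the
NOT EXISTS query.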
> We'll never know, based on this data, how long each specific order took
> but then I don't care - baring exceptional cases (eg 2 hours).
Looking at the average over a day is probably the most meaningful
figure you can get. You can average by number of orders processed, by
number of products picked, or by some fudge-factor algorithm that
combines both.
- Tel ( http://bespoke.homelinux.net/ )
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html