Re: Unique user id

2004-03-31 Thread Ray Daignault
Try having the cron job perform a PHANTOM command. So, your UV batch job
would be PHANTOM JOBNAME.

Also, don't allow multiple jobs to kick off at the same time; schedule at
least 1 min between jobs and you should be right.
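
For example, something like this in the crontab (the account path, uv
binary location, and job names here are just placeholders, not anything
from your system):

    0 2 * * * cd /u2/prod && echo "PHANTOM NIGHTLY.UPDATE" | /usr/ibm/uv/bin/uv
    5 2 * * * cd /u2/prod && echo "PHANTOM NIGHTLY.REPORT" | /usr/ibm/uv/bin/uv

Each entry cd's into the account, hands one PHANTOM command to uv on
stdin, and exits; the two jobs are staggered five minutes apart.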

Cheers,

Ray D
- Original Message - 
From: Anthony Dzikiewicz [EMAIL PROTECTED]
To: U2 Users Discussion List [EMAIL PROTECTED]
Sent: Wednesday, March 31, 2004 12:00 PM
Subject: RE: Unique user id


 I'm not sure how to get the user number unique.  I do know that the pid
 is always unique.  Is it that you are up against the ol' 'at/cron' can
 only have one thing running at a time?  If that's it, then you may have
 to do some work.  What we do here is we have a phantom running as a
 job.monitor, which processes all of our batch jobs.  With this there is
 a whole system of commands (SUBMIT.JOB, CANCEL.JOB, etc...).  We have
 our cron / at jobs log in quickly and do a SUBMIT.JOB myjob.  This way
 the cron process only runs for a second.  It basically tells the 'other
 guy' to do the work.  We have been successful with this.
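
 For illustration, the cron side is then just something like this
 (SUBMIT.JOB is our site-specific command; the paths and names here are
 only examples, not our real ones):

     0 2 * * * cd /u2/prod && echo "SUBMIT.JOB EOD.UPDATE" | /usr/ibm/uv/bin/uv

 The login lasts only as long as it takes SUBMIT.JOB to queue the
 request; the job.monitor phantom picks it up and runs it under its own
 user number.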

 Anthony Dzikiewicz

  -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]  On
 Behalf Of Drew Henderson
 Sent: Wednesday, March 31, 2004 1:24 PM
 To: U2 Users Discussion List
 Subject: Unique user id

 Hey gang,

 We've moved to a RH Linux platform from an HP-UX box, and have started
 using the at subsystem for running batch jobs.  We've run into a
 problem with the jobs all having the same user number... any
 suggestions about how to get these to be unique?

 Thanks,
 Drew

 --
 Drew Henderson
 Dir. for Computer Center Operations
 [EMAIL PROTECTED]
 110 Ginger Hall
 Morehead State University
 Morehead, KY  40351
 Phone: 606/783-2445   Fax: 606/783-5078

 "There are two types of people - those who do the work and those who
 take the credit.  Try to be in the first group; there is less
 competition."  - Indira Gandhi
 --






-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: D3 on NT

2004-02-11 Thread Ray Daignault
Is the conversion to an existing Oracle application, or is your
client writing a new Oracle application?

Also, how old is the D3?  Does it support ODBC?  Pull the data
out by ODBC to a TEXT ODBC target and then load it into
Oracle.
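
If ODBC turns out not to be an option, a flat-file dump works too.  A
minimal UniVerse BASIC sketch of the idea (file and field names are
invented, and whether D3 takes the same OPENSEQ/WRITESEQ syntax is
exactly question 1 below):

    * Dump CUSTOMER to comma-delimited text for loading into Oracle
    * (e.g. via SQL*Loader).  EXPORT.DIR is assumed to be a type 19
    * directory file.
    OPENSEQ 'EXPORT.DIR', 'customer.csv' TO SEQ.FILE ELSE
       CREATE SEQ.FILE ELSE STOP 'Cannot create customer.csv'
    END
    WEOFSEQ SEQ.FILE        ;* truncate any previous contents
    OPEN 'CUSTOMER' TO F.CUST ELSE STOP 'Cannot open CUSTOMER'
    SELECT F.CUST
    LOOP
       READNEXT ID ELSE EXIT
       READ REC FROM F.CUST, ID THEN
          WRITESEQ ID : ',' : REC<1> : ',' : REC<2> TO SEQ.FILE ELSE
             STOP 'Write failed'
          END
       END
    REPEAT
    CLOSESEQ SEQ.FILE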

If it's a new application, you might want to take a look at
ON-WARE.  It's a product that will allow you to keep the business
rules within your application but use Oracle as a datastore.

Cheers,

Ray D
- Original Message - 
From: Dahn Finard [EMAIL PROTECTED]
To: U2 Users Discussion List [EMAIL PROTECTED]
Sent: Wednesday, February 11, 2004 6:23 AM
Subject: D3 on NT



 Although I have been working in many Pick flavors for the past 20+
 years, I have been working in Universe for the past 8 years.  I have a
 client that is looking for a conversion out of D3/NT to Oracle.  I have
 two questions:
 1. Does D3 support the OPENSEQ and WRITESEQ that Universe does?  I
 downloaded the D3 BASIC manual and found the UOPEN and UCREATE.
 2. Could the D3 experts in the group please offer any suggestions and
 information about their experiences in conversions from D3?  I know
 that this is not the direction that we would like to see software going
 in, but I did not make the decision about the company's IS goals.


 Dahn Finard




-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: Thanks - Was [UV] Resize - Dynamic or 64 bit?

2004-02-05 Thread Ray Daignault
Separation is pretty much unlimited under uniVerse.  However, from a
practical perspective, you should try to tailor your group size to
something that accommodates your data.  For example, if your AVERAGE
record size is 8k, you will have records both larger and smaller than 8k.

One thing you wish to avoid in uniVerse is large record fragmentation.
If a record will not fit into a single group, uniVerse allocates a disk
block at the end of the file to store the majority of the data, while
the record's entry stays in its proper primary group with flags and
pointers indicating where the rest of the data is stored.  This forces
multiple physical disk reads and extra disk head movement (logical reads
are served from disk cache and are not as expensive).  Since the disk
subsystem is the slowest part of your physical computer system, it's
best to avoid physical disk reads.

A separation of 32 (16k groups) will ensure that you avoid any oversized
blocks.  The original reason for choosing smaller separations was based
on disk block sizes on older unix systems, which used either a 1k
(EXL 316, 320, 325) or 2k (Pyramid) block read.  So on a Pyramid, all
unix reads were performed as if a separation of 4 existed (even if your
file was using a separation of 1, like some PICK-migrated systems).
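
At TCL that's just a resize.  A sketch, with 32 x 512 = 16384 byte
groups so the 8k average record above fits in one group (the file name
and modulus here are invented; size the modulus from your own record
counts):

    RESIZE INV.HISTORY 18 100003 32

A FILE.STAT afterwards should confirm that few, if any, records are
still oversized.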

By the way, a larger separation with smaller records results in
excessive CPU usage: at the group level, uniVerse effectively performs a
string search for the requested key, so the larger the group, the more
searching per record.

Currently, systems like HP-UX and AIX use a unix disk block size of 64k,
though this can be tuned in the kernel.  NT by default uses a read block
size of 4k on NTFS partitions.

Regards,

Ray Daignault
- Original Message - 
From: [EMAIL PROTECTED]
To: U2 Users Discussion List [EMAIL PROTECTED]
Sent: Thursday, February 05, 2004 9:22 AM
Subject: Re: Thanks - Was [UV] Resize - Dynamic or 64 bit?


 Thanks Ray and all of those who replied to my questions.  I wound up
 choosing the 64bit option.  For the record, I am in favor of using
 distributed files.  I have many situations where I use them.  However,
 I really didn't have the time to come up with a good algorithm to
 achieve even distribution.  The file is made up of 50-transaction
 chunks of inventory history.  One item can have numerous records.  It
 looks like

 Internal Part# = sequential number assigned at item creation time.

 Field 4 of the master record is a counter of history records.

 So, if part# 1, field 4 = 2, I would have...

 Key = 1*1
 1 Stockroom ]  (MV associated to 1, Max of 50)
 2 Trans Qty ]  (MV associated to 1, Max of 50)
 3 Trans Uom ]  (MV Associated to 1, Max of 50)
 ...
 25 Tran Type]  (MV Associated to 1, Max of 50)

 Key = 1*2
 1 Stockroom ]  (MV associated to 1, Max of 50)
 2 Trans Qty ]  (MV associated to 1, Max of 50)
 3 Trans Uom ]  (MV Associated to 1, Max of 50)
 ...
 25 Tran Type]  (MV Associated to 1, Max of 50)
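
 So reading one part's full history just means walking the chunks -
 roughly this, minus error handling (F.MASTER / F.HISTORY stand in for
 the opened file variables):

     READ MASTER FROM F.MASTER, PART.NO ELSE STOP 'no master'
     N.CHUNKS = MASTER<4>           ;* field 4 counts history records
     FOR CHUNK = 1 TO N.CHUNKS
        READ HIST FROM F.HISTORY, PART.NO : '*' : CHUNK THEN
           STOCKROOMS = HIST<1>     ;* up to 50 MV'd transactions/field
        END
     NEXT CHUNK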

 When I have time, I will be changing to something like 15 or 20
 transactions per record to decrease the record sizes.  I am also going
 to be changing to a distributed file so that maintenance becomes less
 time consuming.
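
 Since the internal part numbers are sequential, even a simple MOD on
 the key should spread things evenly across part files - the
 partitioning idea, as a sketch (I'd still have to check the exact
 algorithm interface that DEFINE.DF expects):

     * key looks like "partno*chunk"; spread across 10 part files
     PART.NO = FIELD(KEY, '*', 1)
     PART.FILE.NO = MOD(PART.NO, 10) + 1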

 Ray, you mentioned changing to a separation of 32 to get around
 performance hits when accessing the file.  I thought that the maximum
 recommended separation was 16?  Has this changed?

 Thanks again to all who responded in my moment of need.

snip
  With records that size I'd also be looking at the separation figure.
 It's a really awkward record size for storing in hashed files.  You
 need a large separation (perhaps 32); otherwise many - most - of your
 records will be treated as oversized, incurring an I/O penalty when
 accessing them.  For Dynamic files, the best you can achieve is 4KB
 groups, which militates against this choice.
snip

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users