SV: Performance Discussion - Unidata

2004-02-26 Thread Björn Eklund
Hi Martin,
We have equipment that looks a lot like yours: the same server, but with double
the amount of CPU and RAM.
We also have external SAN storage (15,000 rpm FC disks) where all the
Unidata files reside.
When we started the system for the first time, everything we tried to do was
very slow. After tuning the storage cabinet's cache we got acceptable
performance.

After some time we started looking for other ways of improving performance
and resized all our files. The biggest change was going from block size 2
to 4 on almost every file. This gave an improvement of about 50-100% in the
performance of our disk-intensive batch programs.
I don't remember any figures for read and write speeds, but I can
ask our Unix admin to dig them up if you want.

It's just a guess, but I do believe that Unidata relies heavily on the Solaris
buffers.

Regards
Björn

-Original Message-
From: Martin Thorpe [mailto:[EMAIL PROTECTED]
Sent: 25 February 2004 19:13
To: [EMAIL PROTECTED]
Subject: Performance Discussion - Unidata


Hi guys

Hope everybody is ok!

To get straight to the point, the system is as follows:

SunFire V880
2 x 1.2 GHz UltraSPARC III Cu processors
4 GB RAM
6 x 68 GB 10k rpm FC-AL disks
96GB backplane

Disks are grouped together to create volumes - as follows:

Disk 1   -   root, var, dev, ud60, xfer   -   RAID 1   (Root Volume Primary Mirror)
Disk 2   -   root, var, dev, ud60, xfer   -   RAID 1   (Root Volume Submirror)
Disk 3   -   /u                           -   RAID 10  (Unidata Volume Primary Mirror - striped)
Disk 4   -   /u                           -   RAID 10  (Unidata Volume Primary Mirror - striped)
Disk 5   -   /u                           -   RAID 10  (Unidata Volume Submirror - striped)
Disk 6   -   /u                           -   RAID 10  (Unidata Volume Submirror - striped)

UD60   -   Unidata Binary area
XFER   -   Data output area for Unidata accounts (csv files etc)
/U     -   Primary Unidata account/database area.

If I run tests on the system using both dd and mkfile, I see speeds of
around 50 MB/s for writes and 60 MB/s for reads. However, if a colleague
loads a 100 MB csv file into a Unidata file using READSEQ, not doing
anything fancy, I see massive average service times (asvc_t in iostat),
the device is almost always 100% busy, and there is no real CPU overhead
but writes top out at about 15 MB/s. There is only ONE person using this
system (to test throughput).
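
For reference, the kind of load my colleague is running is essentially the
following (a minimal UniBasic sketch, not his actual program - the XFER
pointer, the file names and the key position are assumptions):

* Minimal sketch: load a sequential csv file into a hashed Unidata file
OPENSEQ 'XFER', 'customers.csv' TO SEQ.FILE ELSE STOP
OPEN 'CUSTOMER.FILE' TO F.CUST ELSE STOP
LOOP
   READSEQ LINE FROM SEQ.FILE ELSE EXIT   ;* ELSE is taken at end of file
   ID = FIELD(LINE, ',', 1)               ;* first csv column assumed to be the key
   REC = LINE                             ;* the real program would split the columns into attributes
   WRITE REC ON F.CUST, ID                ;* every WRITE dirties a group in the hashed file
REPEAT
CLOSESEQ SEQ.FILE

Nothing in the program itself is exotic - it is just a steady stream of small writes.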

This is confusing. Drilling down: I have set a 16384-block interlace size
on each stripe, and the mounted volume was created with the following
parameters:

mkfs -F ufs -o nsect=424,ntrack=24,bsize=8192,fragsize=1024,cgsize=10,free=1,rps=167,nbpi=8275,opt=t,apc=0,gap=0,nrpos=8,maxcontig=16 /dev/md/dsk/d10 286220352

in /etc/system I have set the following parameters:

set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmmax=8388608
set shmsys:shminfo_shmseg=50
set msgsys:msginfo_msgmni=1615
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=985
set semsys:seminfo_semmnu=1218

set maxpgio=240
set maxphys=8388608

I have yet to change the transfer limit on the ssd drivers in order to break
the 1 MB barrier; however, I would still have expected better performance.
UDTCONFIG is as yet unchanged from the defaults.

Does anybody have any comments?

Things to try in my opinion:

I think I have the RAID layout correct; I have redirected the Unidata TEMP
directory onto the /u RAID 10 partition rather than the RAID 1 ud60 area.

1. Block sizes should match the average Unidata file size.

One question I have: does Unidata perform its own file caching? Can I
mount the filesystems with forcedirectio, or does Unidata rely heavily on
the Solaris buffer cache?

Thanks for any information you can provide

-- 
Martin Thorpe
DATAFORCE GROUP LTD
DDI: 01604 673886
MOBILE: 07740598932
WEB: http://www.dataforce.co.uk
mailto: [EMAIL PROTECTED]

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: SV: Performance Discussion - Unidata

2004-02-26 Thread Martin Thorpe
Hi Bjorn

I agree with you regarding caching - I was unsure whether Unidata does its
own file caching (as Oracle does), in which case you could mount the
Unidata volume with directio; but since it does not, that is not an
option. The problem is that any Unidata operation involving major disk
access seems to slaughter I/O. It could be the most efficient program, but
if it is allowed to run freely (with no delays) you get massive average
service times (usually up around a second, which is totally unacceptable
for one person) and very poor write rates (under 20 MB/s).

A couple of things I have thought about are playing around with the system
file caching, in terms of the write throttle (sd_max_throttle, to limit
the queue) and the UFS high/low water marks, to see if I can pull down the
service times. The read/write speed is not really an issue for me as long
as it is consistent and at an acceptable level; the biggest thing for me
is the average service times, as they cause headaches for everyone else.

With a 1-second delay every 100 records in the mentioned program (code
attached), the average service times are normal and you don't notice any
problems with server response times.
With a 1-second delay every 1000 records, you start to notice a slight
deterioration in the system.
Running freely, you notice a major problem: the service time is around a second.
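
The throttle itself is nothing clever - just a counter and a pause inside
the write loop, roughly like this (a sketch only; the file variables and
RECS.PER.PAUSE are placeholders rather than the real code):

* Throttled version of the load loop - pause for one second every N writes
RECS.PER.PAUSE = 100       ;* 100 behaves, 1000 starts to hurt, no pause swamps the disk
CNT = 0
LOOP
   READSEQ LINE FROM SEQ.FILE ELSE EXIT
   WRITE LINE ON F.CUST, FIELD(LINE, ',', 1)
   CNT = CNT + 1
   IF CNT = RECS.PER.PAUSE THEN
      SLEEP 1              ;* give the buffers a chance to drain
      CNT = 0
   END
REPEAT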

It is as though the buffers fill to the point that they are clogged, and
any disk operations by other processes while this type of program is
running suffer greatly due to the high service times.

I'm going to try researching this, but wondered if anybody had any 
further information for me? I will post my results if anyone is interested.

Björn Eklund wrote:
SNIP

Re: Common Universe/Unidata files

2004-02-26 Thread Donald Kibbey
I suppose switching one of the machines to the other database is not an option?

 [EMAIL PROTECTED] 02/26/04 09:20AM 
This file would be accessed by quite a number of programs, many times during a day.
Thanks, everyone.
You have given us some excellent options to consider.
Randal

-Original Message-
From: Timothy Snyder [EMAIL PROTECTED]
Sent: Feb 25, 2004 3:45 PM
To: U2 Users Discussion List [EMAIL PROTECTED]
Subject: Re: Common Universe/Unidata files


Randal,

With half a million records, I strongly agree that you don't want to put
this in a directory file.

Without knowing more about what you're doing, I'd be inclined to set up a
hashed file stored in either UniVerse or UniData (flip a coin).  Then you
could put together a socket client/server setup to handle updates from the
other side.  That could be set up with the appropriate rules to handle
locking and anything else that may be required to properly negotiate
updates.  You might need to pass around some other information, such as
the next key, etc., but it shouldn't be too bad.

Is this something that would need to be accessed from many programs, or
just a few?  How frequently would the file be accessed and updated?


Tim Snyder
IBM Data Management Solutions
Consulting I/T Specialist , U2 Professional Services

Office (717) 545-6403  (rolls to cell phone)
[EMAIL PROTECTED] 

Randal LeBlanc wrote on 02/25/2004 02:49:10 PM:

 We have separate applications running on Universe and Unidata. The
 apps run on separate
 aix servers.
 We would like to create a common file that can be accessed by both
 applications
 but are not sure of the best approach to take.
 Any suggestions??
--
u2-users mailing list
[EMAIL PROTECTED] 
http://www.oliver.com/mailman/listinfo/u2-users 


Randal LeBlanc
JCL Associates, Inc.
[EMAIL PROTECTED] 
www.jclinc.com 
-- 
u2-users mailing list
[EMAIL PROTECTED] 
http://www.oliver.com/mailman/listinfo/u2-users



Is pickwiki down?

2004-02-26 Thread Karjala Koponen
Haven't been able to go to www.pickwiki.com for the last couple of days.

Karjala
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: [UV] How much do you pay for support each year?

2004-02-26 Thread Glenn W. Paschal
No.  Not true.  My client's original VAR was Monolith.  Their current VAR is
U2 Logic.
However, between VARs, we did let our support expire, and then jumped back
in during the grace period IBM offered.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Eugene Perry
Sent: Wednesday, February 25, 2004 10:06 AM
To: U2 Users Discussion List
Subject: Re: [UV] How much do you pay for support each year?


I have heard that if you leave your original VAR you can only go to IBM.  I
was told that no other VAR can sell you additional seats for U2.  Is this
true?

Eugene
- Original Message -
From: Dennis Bartlett [EMAIL PROTECTED]
To: 'U2 Users Discussion List' [EMAIL PROTECTED]
Sent: Wednesday, February 25, 2004 4:42 AM
Subject: RE: [UV] How much do you pay for support each year?


 Whoa! Hold on...

 The original poster of this thread never questioned the need for 
 support
 - he just questioned the need for east coast support...

 All he was asking was whether there is a way to approach IBM directly for
 support, and if not, who he can go to as a VAR on the west coast...
 Not so difficult, hey?

 Perhaps the best response would be testimonials from those
 of you on the
 pacific seaboard about your VARs...

 A real no-brainer would be to read the question... <grin>

 -Original Message-
 From: Ross Ferris
 Sent: 24 February 2004 06:06
 Subject: RE: [UV] How much do you pay for support each year? [snip]

 As others have responded, UV support is a GOOD idea - and a relatively
 cheap no brainer option.


 --
 u2-users mailing list
 [EMAIL PROTECTED]
 http://www.oliver.com/mailman/listinfo/u2-users



-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users




-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: UDT.OPTIONS question

2004-02-26 Thread alfkec
Try the DATE.FORMAT command. (Sorry, we use SB+, which deals with this.)
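
If it is only the displayed format you are after, the European option on the D
conversion will also do it, independently of UDT.OPTIONS - a quick generic
illustration (not tied to your ORDER dictionary):

* D conversion with the E (European, day-first) option
TODAY = DATE()
PRINT OCONV(TODAY, 'D2/')    ;* e.g. 02/26/04 - US style, month first
PRINT OCONV(TODAY, 'D2/E')   ;* e.g. 26/02/04 - European style, day first

Putting D2/E (or D4/E) in the conversion attribute of the ENTRY.DATE dictionary
item should change what LIST displays.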

hth
-- 
Colin Alfke
Calgary, Alberta Canada

Just because something isn't broken doesn't mean that you can't fix it

Stu Pickles


-Original Message-
From: Peter Gonzalez [mailto:[EMAIL PROTECTED]
SNIP

UDT.OPTIONS 51 ON

(then list options)

51  U_ALT_DATEFORMAT ON

When we try the above command and then LIST ORDER ENTRY.DATE, we still get
the same American date format instead of European. Could somebody tell me
what's missing? Thanks.



 Module Name Version   Licensed

UniData RDBMS 4.1 Yes

running on SCO.



Thank you,

Peter Gonzalez
Senior Programmer
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Printing to local printer

2004-02-26 Thread Dana Baron
Hi,

We use Unidata (v5.2) on a DEC/Compaq/HP Alpha under Tru64 Unix (v5.0a).
Most of our users connect to our Unidata system via terminal emulation
software (SmartTerm) from Windows-based PCs. Some of those users function as
point-of-sale terminals. We're now trying to integrate credit card
validation via the internet directly from those workstations. We're still
in test mode, but most of this seems to be working OK. One remaining issue is
printing the CC receipt. We would like the receipts to print on CC receipt
printers attached to the workstation as either parallel or serial printers.
We'd rather not set up print queues for all of these. Any ideas on how to
accomplish this?

Dana Baron
System Manager
Smugglers' Notch Resort

 --
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users




Re: Printing to local printer

2004-02-26 Thread Timothy Snyder




Dana,

You can use @(-24) to turn on aux printing and @(-25) to turn it off.
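
A bare-bones illustration in BASIC (the receipt text is obviously made up):

* Send a receipt to the printer attached to the terminal's aux port
PRINT @(-24):                ;* aux (slave) printing on
PRINT "Smugglers' Notch Resort"
PRINT "Thank you for your purchase"
PRINT @(-25):                ;* aux printing off - output returns to the screen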

Tim Snyder
IBM Data Management Solutions
Consulting I/T Specialist , U2 Professional Services

[EMAIL PROTECTED]

Dana Baron wrote on 02/26/2004 02:07:20 PM:

 We would like the receipts to print on CC receipt
 printers attached to the workstation as either parallel or serial
printers.
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Printing to local printer

2004-02-26 Thread Chuck Mongiovi
Dana,
It depends on whether your workstations are networked or not, and on how many
serial printers you have .. If the workstations are networked AND the
printers are parallel, I'd recommend JetDirect boxes - you can set them up
as printers available to the local machine, and then as queues on your UNIX
machine .. If the printers are serial, you can do something similar with
SAMBA .. If the workstations are attached through a serial connection, I'd
use AUX printing like Dianne and Timothy suggested ..
-Chuck


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Dana Baron
Sent: Thursday, February 26, 2004 2:07 PM
To: [EMAIL PROTECTED]
Subject: Printing to local printer


Hi,

We use Unidata (v5.2) on a DEC/Compaq/HP Alpha under Tru64 Unix (v5.0a).
Most of our users connect to our Unidata system via terminal emulation
software (SmartTerm) from Windows-based PCs. Some of those users function as
point-of-sale terminals. We're now trying to integrate credit card
validation via the internet directly from those workstations. We're still
in test mode, but most of this seems to be working OK. One remaining issue is
printing the CC receipt. We would like the receipts to print on CC receipt
printers attached to the workstation as either parallel or serial printers.
We'd rather not set up print queues for all of these. Any ideas on how to
accomplish this?

Dana Baron
System Manager
Smugglers' Notch Resort

 --
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users



mvQuery's MVQUERY_LOGIN & MVQUERY_ABORT features

2004-02-26 Thread Denny Watkins
I'm trying to use the MVQUERY_LOGIN & MVQUERY_ABORT
features and just can't get them to work.
Would any mvQuery users have any sample MVQUERY_LOGIN & MVQUERY_ABORT
paragraphs they could share with me?
You can send them directly to me unless you feel some other
mvQuery users could benefit.
Thanks,

Denny Watkins
Director Computer Services
Morningside College
1501 Morningside Ave
Sioux City, Ia 51106-1717
Phone:  1-712-274-5250

Email:   mailto:[EMAIL PROTECTED]
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Uniobjects & Asp

2004-02-26 Thread Mike Randall
Sounds like a Redback task.  Fully thread safe, and totally scalable.   

Mike R. 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Cooper, Rudy
Sent: Thursday, February 26, 2004 4:46 PM
To: [EMAIL PROTECTED]
Subject: Uniobjects & Asp

Hello Everyone,

I have a requirement to use ASP with UniObjects.  Our OS is W2K and the
backend is UV 10.0.10.  I was thinking about creating ActiveX DLLs in VB6
that would do things like create a UV session, instantiate a subr object, etc.
I read in the u2-users list archive something to the effect that UniObjects
is not thread safe.  Does that still hold true?  If so, how do you make
it thread safe?

thx,

rudy

--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users



Redback vs Raining Data's .Net Data Provider

2004-02-26 Thread Mike Randall
 
Being longtime Redback developers, we are about to evaluate Raining Data's
.Net Data Provider.   Anybody out there with any experience or comments
about the product?

Thanks,

Mike Randall





-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Redback vs Raining Data's .Net Data Provider

2004-02-26 Thread Ross Ferris
Search the archives here - I remember seeing a comment in the last 3-4 months that
it was slow.

Ross Ferris
Stamina Software
Visage - an Evolution in Software Development


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Mike Randall
Sent: Friday, 27 February 2004 12:58 PM
To: 'U2 Users Discussion List'
Subject: Redback vs Raining Data's .Net Data Provider


Being longtime Redback developers, we are about to evaluate Raining Data's
.Net Data Provider.   Anybody out there with any experience or comments
about the product?

Thanks,

Mike Randall





--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


---
Incoming mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.595 / Virus Database: 378 - Release Date: 25/02/2004


---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.595 / Virus Database: 378 - Release Date: 25/02/2004
 
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


[AD] Viságe.BIT Whitepaper (draft) and test drive now available

2004-02-26 Thread Ross Ferris
Finally there is an affordable, high performance Data Warehouse/Business Intelligence 
facility that is fully multi-value aware ? 

As part of the growing Viságe family of products, Viságe.BIT enables you to 
incorporate 21st century Data Warehouse/Business Intelligence capabilities into your 
existing applications.

BIT cubes are defined by simply dragging & dropping fields from your existing
database, and as a true MOLAP product, you can look at EVERY POSSIBLE COMBINATION of
dimensions, rather than having to define discrete views.

Get the full story, and have a play with a Viságe.BIT cube at 
http://www.stamina.com.au/Products/Visage/Visage_BIT.htm 

If you are going to Spectrum next month, drop by the American Computer Technics table
& talk to the guys about your DW/BI, GUI, Reporting and Web Development needs


Ross Ferris
Stamina Software
Visage - an Evolution in Software Development
What will YOU say ...


---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.595 / Virus Database: 378 - Release Date: 25/02/2004
 
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: odbc client hangs

2004-02-26 Thread John Jenkins

<two-pennorth>
Any chance this relates to DNS lookup or domain authentication??
</two-pennorth>

Regards

JayJay 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of gerry simpson
Sent: 26 February 2004 16:52
To: [EMAIL PROTECTED]
Subject: odbc client hangs

I have uvpe 10.0.4 and uvodbc 3.7 installed on a w2k server.

I had been able to establish connections to a remote UniVerse installation
(HP-UX 11, UV 10) using uvodbc through a VPN connection.
Somewhere along the line things have gone wrong.

Now when I try to connect to the remote server, even just to do a test via
the ODBC config, the client hangs and the CPU usage goes to 100% and stays
there indefinitely.
If I try to connect to the local UniVerse server (same machine), the
connection goes through but takes much too long, and still takes 100% CPU
until the connection is established.

I tried disabling all UniVerse services, uninstalling the ODBC client, and
rebooting and reinstalling the ODBC client, all to no effect.
I have another W2K server setup which has no problem at all connecting to the
same remote UniVerse installation - this second machine, however, does not
have UniVerse installed on it.
All UV-related DLLs I have looked at are the same version on both machines.

Turning the server log on from both clients shows that the client is hanging
just after the server has responded to the GetInfo request, and the server is
left sitting at "Reading synchronization token ..." until the client resets
the connection - which happens, of course, when the client process is
cancelled/killed.

The archive mentions this problem as having been fixed in ODBC 3.7.

Has anyone seen this, or does anyone have any suggestions?

gerry

--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users




[UV] ODBC / OleDB server logging

2004-02-26 Thread Bob Gerrish
I have searched high and low and have found many references to
UniVerse ODBC / OleDB server logging, where to find the logs, etc., but
I cannot find how to turn on logging on Unix under UniVerse 9.5.1.1
(or higher).  I would imagine, since this is Unix, that somewhere
a -d or -v flag is used on uvosrv or some other binary.  The second piece
is, once logging is turned on, is there a way to tell how long a
query took on the server, so we can compare that with the query
time seen on the client?
Thanks,
Bob Gerrish  -  [EMAIL PROTECTED]


--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Printing to local printer

2004-02-26 Thread KevinJ Jones
You can use a porter switch that can be controlled by software. I have a
site with 4 printers, with different forms in each, hooked to one parallel
port. The first characters in each print job select the printer and the
character size we want to print at.
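
In BASIC that ends up looking something like this (a rough sketch - the
selector string is a made-up placeholder, since the real codes depend
entirely on the switch box):

* Route a receipt through a software-controlled printer switch on the aux port
SELECT.RECEIPT = CHAR(27):'R1'   ;* hypothetical code the switch recognises
PRINT @(-24):                    ;* aux print on
PRINT SELECT.RECEIPT:            ;* first characters pick the printer/pitch
PRINT RECEIPT.TEXT               ;* RECEIPT.TEXT is built elsewhere
PRINT @(-25):                    ;* aux print off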




Kevin Jones
(315) 445-4270

 [EMAIL PROTECTED] 2/26/04 3:03:02 PM 
Tim and Diane,

Thanks for the quick replies! I guess I need to be a little more
specific.
We really want to have our cake and eat it too.

We use AUX ON / AUX OFF for local printers already. The problem is, we
want
to be able to specify which local printer to use as the AUX. So, when
the
user is printing a CC receipt, it would print to the printer attached
to the
PC that is defined as the CC receipt printer. But, when the user is
trying
to print a report, it would print to the local laser printer (which
could
also be a networked printer). One way to do this is to have the user
switch
the local printer within SmartTerm, but we wanted to try to handle this
in
the software, if possible, because...well, you know how users are.

Thanks

Dana Baron
System Manager
Smugglers' Notch Resort


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] 
Behalf Of Timothy Snyder
Sent: Thursday, February 26, 2004 2:51 PM
To: U2 Users Discussion List
Subject: Re: Printing to local printer




Dana,

You can use @(-24) to turn on aux printing and @(-25) to turn it off.

Tim Snyder
IBM Data Management Solutions
Consulting I/T Specialist , U2 Professional Services

[EMAIL PROTECTED] 

Dana Baron wrote on 02/26/2004 02:07:20 PM:

 We would like the receipts to print on CC receipt
 printers attached to the workstation as either parallel or serial
printers.
--
u2-users mailing list
[EMAIL PROTECTED] 
http://www.oliver.com/mailman/listinfo/u2-users 

-- 
u2-users mailing list
[EMAIL PROTECTED] 
http://www.oliver.com/mailman/listinfo/u2-users


RE: Printing to local printer

2004-02-26 Thread Dana Baron
Tim and Diane,

Thanks for the quick replies! I guess I need to be a little more specific.
We really want to have our cake and eat it too.

We use AUX ON / AUX OFF for local printers already. The problem is, we want
to be able to specify which local printer to use as the AUX. So, when the
user is printing a CC receipt, it would print to the printer attached to the
PC that is defined as the CC receipt printer. But, when the user is trying
to print a report, it would print to the local laser printer (which could
also be a networked printer). One way to do this is to have the user switch
the local printer within SmartTerm, but we wanted to try to handle this in
the software, if possible, because...well, you know how users are.

Thanks

Dana Baron
System Manager
Smugglers' Notch Resort


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Timothy Snyder
Sent: Thursday, February 26, 2004 2:51 PM
To: U2 Users Discussion List
Subject: Re: Printing to local printer




Dana,

You can use @(-24) to turn on aux printing and @(-25) to turn it off.

Tim Snyder
IBM Data Management Solutions
Consulting I/T Specialist , U2 Professional Services

[EMAIL PROTECTED]

Dana Baron wrote on 02/26/2004 02:07:20 PM:

 We would like the receipts to print on CC receipt
 printers attached to the workstation as either parallel or serial
printers.
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users



Re: Is pickwiki down?

2004-02-26 Thread Ian McGowan
ah, wendy has put a lot of effort into providing examples and tips. 
nice to see someone is actually using it :-)

the temporary site name is

http://fido.trinitycapital.com/cgi-bin/wiki.pl (fido since it's an
intrusion detection system :-)

i restored from last sunday's backup, but i don't think there were
any entries since then.  any problems feel free to email me directly.

ian

On Thu, 2004-02-26 at 09:33, Karjala Koponen wrote:
 Thanks!  Just starting to build up the energy to start learning how to
 use UniObjects.  Karjala
 
  [EMAIL PROTECTED] 02/26/2004 11:42:41 AM 
 yes, sorry.
 
 there seems to be a hardware problem, so i'm going to move the site to a
 spare server here at work, and steal a small amount of bandwidth.
 
 it gets so few hits, i thought no one would notice except wendy :-)
 
 ian
-- 
Ian McGowan [EMAIL PROTECTED]

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Is pickwiki down?

2004-02-26 Thread Wendy Smoak

 Haven't been able to go to www.pickwiki.com for the last 
 couple of days.

It appears so... I emailed Ian and asked him to check.

-- 
Wendy Smoak
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: Common Universe/Unidata files

2004-02-26 Thread Randal LeBlanc
Donald,
It has been considered and is a possibility.

-Original Message-
From: Donald Kibbey [EMAIL PROTECTED]
Sent: Feb 26, 2004 8:33 AM
To: [EMAIL PROTECTED]
Subject: Re: Common Universe/Unidata files

I suppose switching one of the machines to the other database is not an option?

 [EMAIL PROTECTED] 02/26/04 09:20AM 
This file would be accessed by quite a number of programs, many times during a day.
Thanks, everyone.
You have given us some excellent options to consider.
Randal

-Original Message-
From: Timothy Snyder [EMAIL PROTECTED]
Sent: Feb 25, 2004 3:45 PM
To: U2 Users Discussion List [EMAIL PROTECTED]
Subject: Re: Common Universe/Unidata files


Randal,

With half a million records, I strongly agree that you don't want to put
this in a directory file.

Without knowing more about what you're doing, I'd be inclined to set up a
hashed file stored in either UniVerse or UniData (flip a coin).  Then you
could put together a socket client/server setup to handle updates from the
other side.  That could be set up with the appropriate rules to handle
locking and anything else that may be required to properly negotiate
updates.  You might need to pass around some other information, such as the
next key, etc., but it shouldn't be too bad.

Is this something that would need to be accessed from many programs, or
just a few?  How frequently would the file be accessed and updated?


Tim Snyder
IBM Data Management Solutions
Consulting I/T Specialist , U2 Professional Services

Office (717) 545-6403  (rolls to cell phone)
[EMAIL PROTECTED] 

Randal LeBlanc wrote on 02/25/2004 02:49:10 PM:

 We have separate applications running on Universe and Unidata. The
 apps run on separate
 aix servers.
 We would like to create a common file that can be accessed by both
 applications
 but are not sure of the best approach to take.
 Any suggestions??
--
u2-users mailing list
[EMAIL PROTECTED] 
http://www.oliver.com/mailman/listinfo/u2-users 


Randal LeBlanc
JCL Associates, Inc.
[EMAIL PROTECTED] 
www.jclinc.com 
-- 
u2-users mailing list
[EMAIL PROTECTED] 
http://www.oliver.com/mailman/listinfo/u2-users



Randal LeBlanc
JCL Associates, Inc.
[EMAIL PROTECTED]
www.jclinc.com
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


UDT.OPTIONS question

2004-02-26 Thread Peter Gonzalez
UDT.OPTIONS 51 ON

(then list options)

51  U_ALT_DATEFORMAT ON

When we try the above command and then LIST ORDER ENTRY.DATE, we still get the same
American date format instead of European. Could somebody tell me what's
missing? Thanks.



Module Name Version   Licensed

UniData RDBMS 4.1 Yes

running on SCO.



Thank you,

Peter Gonzalez
Senior Programmer
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: Common Universe/Unidata files

2004-02-26 Thread Randal LeBlanc
This file would be accessed by quite a number of programs, many times during a day.
Thanks, everyone.
You have given us some excellent options to consider.
Randal

-Original Message-
From: Timothy Snyder [EMAIL PROTECTED]
Sent: Feb 25, 2004 3:45 PM
To: U2 Users Discussion List [EMAIL PROTECTED]
Subject: Re: Common Universe/Unidata files


Randal,

With half a million records, I strongly agree that you don't want to put
this in a directory file.

Without knowing more about what you're doing, I'd be inclined to set up a
hashed file stored in either UniVerse or UniData (flip a coin).  Then you
could put together a socket client/server setup to handle updates from the
other side.  That could be set up with the appropriate rules to handle
locking and anything else that may be required to properly negotiate
updates.  You might need to pass around some other information, such as the
next key, etc., but it shouldn't be too bad.

Is this something that would need to be accessed from many programs, or
just a few?  How frequently would the file be accessed and updated?


Tim Snyder
IBM Data Management Solutions
Consulting I/T Specialist , U2 Professional Services

Office (717) 545-6403  (rolls to cell phone)
[EMAIL PROTECTED]

Randal LeBlanc wrote on 02/25/2004 02:49:10 PM:

 We have separate applications running on Universe and Unidata. The
 apps run on separate
 aix servers.
 We would like to create a common file that can be accessed by both
 applications
 but are not sure of the best approach to take.
 Any suggestions??
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Randal LeBlanc
JCL Associates, Inc.
[EMAIL PROTECTED]
www.jclinc.com
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: ODBC problem - biggie and technical

2004-02-26 Thread Anthony Youngman
And you've completely missed the point ...

Yes it is a local file. But it's created as part of the bigger process
of enabling access to the account. Just creating the file is useless.
Once it's created, getting HS.UPDATE.FILEINFO to run isn't enough. It's
all the other things that prepare this account for ODBC access that
aren't getting done.

What's the point of creating the file, if the odbc daemon doesn't know
the account exists to look for it? etc etc. I don't know what's going on
under the bonnet, and if the car splutters to a halt half way down the
road EVERY time I try, I'm not going to get very far, am I?

Anyway - this problem was caused (I presume) because I kept *deleting*
the file to try and give each subsequent attempt another clear shot at
enabling access. So it's clear that just creating it is going to do
nothing.

Cheers,
Wol

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Peter Olson
Sent: 26 February 2004 12:48
To: 'U2 Users Discussion List'
Subject: RE: ODBC problem - biggie and technical

Isn't HS_FILE_ACCESS a local file in the account you want to export
tables from?
If not, create it.
Then see if you can see the table from the client.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Anthony Youngman
Sent: Thursday, February 26, 2004 4:30 AM
To: [EMAIL PROTECTED]
Subject: ODBC problem - biggie and technical


I'm trying to enable odbc access to an account. odbc is working fine for
other accounts, so it SEEMS to be a problem with just this one account.
However, I suspect I may have damaged the HS.ADMIN account in my efforts
to get this to work :-(

Okay. What's caused the problem is wIntegrate. As you may know (various
people have moaned) it will occasionally terminate a session for no
apparent reason whatsoever. The only program that does this to us is
HS.UPDATE.FILEINFO. You can guess what's coming ...

The enable odbc access option in HS.ADMIN calls HS.UPDATE.FILEINFO as
part of its workings. It killed the session half way through ...

After several attempts to get it to behave itself, I am now getting
errors where it is unable to initialise the HS_FILE_ACCESS file. I'm not
sure whether there is supposed to be an HS_FILE_ACCESS file in the
HS.ADMIN account, but if there is, I've accidentally deleted it. It
doesn't appear to be there on my other server, though.

So I'm now stuffed. How do I get HS.ADMIN working again, or
alternatively bypass it completely to enable ODBC access. If anybody
knows what *HS.ACTIVATE does (in detail), or can slip me the source or
tell me where to find it, or can help me some other way, thanks very
much.

Cheers,
Wol




***

This transmission is intended for the named recipient only. It may
contain
private and confidential information. If this has come to you in error
you
must not act on anything disclosed in it, nor must you copy it, modify
it,
disseminate it in any way, or show it to anyone. Please e-mail the
sender to
inform us of the transmission error or telephone ECA International
immediately and delete the e-mail from your information system.

Telephone numbers for ECA International offices are: Sydney +61 (0)2
9911
7799, Hong Kong + 852 2121 2388, London +44 (0)20 7351 5000 and New York
+1
212 582 2333.



***

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Notice of Confidentiality:  The information included and/or attached in
this
electronic mail transmission may contain confidential or privileged
information and is intended for the addressee.  Any unauthorized
disclosure,
reproduction, distribution or the taking of action in reliance on the
contents of the information is prohibited.  If you believe that you have
received the message in error, please notify the sender by reply
transmission and delete the message without copying or disclosing it. 

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users




Re: odbc client hangs

2004-02-26 Thread gerry simpson
Hi John,

I don't think so; the client is connecting to the server and getting most
of the way through the connection process, according to the server log.
I would think that a DNS problem would cause no connection to be made at all,
and a timeout error to occur - which does happen, as expected, if the VPN
client is not running or a bogus server is supplied.
I would also expect this to happen with both machines.
My guess is that some DLL or other component has gotten corrupted somewhere,
although I have no idea which it might be.
For now I have moved my development efforts from the problem machine to the
OK machine, as my dev machine will be getting a re-install shortly.

gerry


- Original Message - 
From: John Jenkins [EMAIL PROTECTED]
To: 'U2 Users Discussion List' [EMAIL PROTECTED]
Sent: Thursday, February 26, 2004 5:57 PM
Subject: RE: odbc client hangs



 two-pennorth
 Any chance this relates to DNS lookup or doamin authentication??
 /two-pennorth

 Regards

 JayJay

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
 Behalf Of gerry simpson
 Sent: 26 February 2004 16:52
 To: [EMAIL PROTECTED]
 Subject: odbc client hangs

 I have uvpe 10.0.4 and uvodbc 3.7 installed on a w2k server.

 I had been able to establish connections to a remote universe installation
(
 hpux 11 uv 10 ) using  uvodbc through a vpn connection.
 somewhere along the line things have gone wrong.

 now when i try to connect to the remote server , even just do a test via
 odbc config , the client hangs and the cpu usage goes to 100% and stays
 there indefinately.
 if i try to connect to the local universe server ( same machine ) the
 connection goes through but takes much too long and still takes 100% cpu
 until the connection is established.

 i tried disabling all universe services , uninstalling odbc client ,
 rebooting  reinstalling odbc client all to no effect.
 I have another w2ks setup which has no problem at all connecting to the
same
 remote universe installation - this second machine however does not have
 universe installed on it.
 all uv related dlls I have looked at are the same version on both
machines.

 turning server log on from both clients shows that the client is hanging
 just after the server has responded to the GetInfo request and the server
is
 left sitting Reading synchronization token ... until the client resets
the
 connection - which happens of course when the client process is
 cancelled/killed.

 the archive mentions this problem as having been fixed in odbc 3.7

 anyone seen this or have any suggestions ?

 gerry

 --
 u2-users mailing list
 [EMAIL PROTECTED]
 http://www.oliver.com/mailman/listinfo/u2-users


 -- 
 u2-users mailing list
 [EMAIL PROTECTED]
 http://www.oliver.com/mailman/listinfo/u2-users

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users