Re: Callbacks

2004-06-12 Thread Rolf Kalbermatter
Craig Graham [EMAIL PROTECTED] wrote:

I'm playing with a library (on Linux) where a particular function call
requires information that it obtains from a callback function. It doesn't
return until the callback has executed, and fails if the callback pointer is
null.

Old Labviews didn't do callbacks. Can this now be done in 7.1? I'd rather
not go down the route of changing the library and maintaining a local hacked
version.

No, it isn't there, and in fact it would be troublesome to do. No doubt it could
be done, but the knowledge you would need to configure such a function properly
so it doesn't crash is already so high that anyone with that knowledge could
easily come up with a wrapper DLL.

Since you seem to have the DLL source available, what I would do is one of two
things: either write a wrapper DLL translating the callback into an occurrence,
or add a new LabVIEW-friendly function which takes an occurrence as a parameter
and translates the underlying callback into an occurrence event.

The first has the advantage of not changing the actual DLL; the second would be
interesting if you have a chance of getting the source code modifications back
into the main source code tree, so that you don't need to maintain your local
copy. As such it is only suited if you are directly involved in the development
or there is a distributed development infrastructure, as is often used for Open
Source projects.

I'll be gone for a while with no internet access, but if it is not too urgent I
would be willing to show you how such a translation from callback to occurrence
could be done after my return.

Rolf Kalbermatter
CIT Engineering Nederland BV     tel: +31 (070) 415 9190
Treubstraat 7H                   fax: +31 (070) 415 9191
2288 EG Rijswijk                 http://www.citengineering.com
Netherlands                      mailto:[EMAIL PROTECTED]





Re: Application Error

2004-06-11 Thread Rolf Kalbermatter
Lai, Ann [EMAIL PROTECTED] wrote:

I get this error when I tried to save my program: The instruction at
'0x0081aa02' reference memory at 0x could not be 'read'.
Click on OK to terminate the program.  Click on CANCEL to debug the
program.  

There is not very much we can do with this message in itself. It is
however quite likely that you use some VIs with external code (DLL calls,
CINs) where this code does not behave as it should. It is at least the
way I usually create such errors :-)

What sort of program is this? What VI functionality do you use inside
that program? Are there any third-party drivers in there calling into
external code?
Can you create a smaller application in which this problem can be reproduced?

While answers to the other questions might only allow for more or less
intelligent guesses, the last one would be the ideal situation for helping
you get this problem squashed.

Rolf Kalbermatter





Re: Omega CYC 321-01 Temperature Controller

2004-06-11 Thread Rolf Kalbermatter
John Schafer [EMAIL PROTECTED] wrote:

Does anyone have a LabVIEW driver for this Instrument from Omega and more
importantly know the exact wiring config of the Serial Interface Port
(RJ-11) on the instrument.  I am somewhat confused because they say that its
an RS232 config but they show a GND for the RX and a GND for the TX which
typically is RS485?  Thanks for any help.

RS-485 would be more like TX+/TX- and RX+/RX-. There is no real signal
ground, only a common line, more like an earth. They may have separate
ground lines for the two transmission lines, but these are probably still
ground referenced, meaning the two GNDs always have the same electrical
potential.

I wouldn't let myself be impressed by such strange definitions and would
simply try it out, possibly with a breakout box with LEDs to check that
the data transmission is really OK.

Rolf Kalbermatter





Re: Replace array row in a loop

2004-06-11 Thread Rolf Kalbermatter
Mihai Costache [EMAIL PROTECTED] wrote:

long explanation snipped

It's all very nice, except it takes 20 minutes (Pentium III 800MHz Win2K
LV7.1)!
I then replace the FOR loop with a formula node and inside that a workmate
of mine writes the corresponding C code to do exactly the same job.
Now it takes 10 seconds!
Is LabVIEW really worthwhile, or do we all need to go back to basics and
learn C?

Can you provide a VI somewhere that shows what you are doing? Your
explanation is a little difficult to picture (not your fault, as such things
are hard to describe at any rate), but I'm sure there are ways to do that
perfectly fast in LabVIEW alone, possibly even beating the formula node.

The problem is not that LabVIEW is so bad, but rather that it takes so much
work off your hands that in such situations you don't worry about optimal
algorithms in the first place, and when you do, it is sometimes hard to see
the optimal solution.

Rolf Kalbermatter





Re: Call Library

2004-06-08 Thread Rolf Kalbermatter
Greg McKaskle [EMAIL PROTECTED] wrote:

If you need the return value to work, you will want to get it as a four 
by integer.  You won't be able to get it into a string without making 
another call to dereference the pointer, interpret it as a string, and 
copy the contents into a LV string.

Greg, you underestimate LabVIEW here :-) Luckily that is not true anymore.
Since about LabVIEW 6.0 or 6.1, the CLN actually has a string return type.
No array though (and for good reason), so if you needed to retrieve an
array like that, it would get troublesome in the way you have pointed out.

Rolf Kalbermatter





Re: [W] Is it Possible to use InPort / OutPort with PCI?

2004-06-06 Thread Rolf Kalbermatter
Martin Kunze [EMAIL PROTECTED] wrote:

Digital logic told me that they can not tell me a abs. address
for this device and they dont have a device driver for it.

No, they indeed can't tell you an absolute address as the ISA IO address
range is mapped into some address range through PnP services at system
startup and can vary from PC to PC and even between startups. That they
do not have a device driver for their board is a much more serious problem.
Somebody today selling PC hardware without at least Windows NT support
drivers is definitely in the wrong business. But hey maybe they support
Linux much better.

How can I find out the base address of a ISA device connected via a PCI bridge from 
LV?

You can try to check in the Device Manager. Usually you see the assigned resources
there, but that doesn't guarantee that the address couldn't change after a reboot.
With Linux this would in fact be easier.

How can I talk to the device from LV? Is it possible to use InPort an OutPort from LV?

Well, here we get into Plug & Play OS services with resource enumeration. The ISA
bus on the other side of the PCI bridge is typically mapped into memory space at
startup by the OS. It could also sometimes be mapped into the upper IO address space,
but that is rather tricky. An ISA bus can theoretically have a 64 kB IO address range,
and the 80x86 architecture has only a 64 kB IO address range, so trying to map secondary
PCI bus addresses into the 80x86 IO address range carries the possibility of conflicts
with other IO addresses. Therefore PCI bridges sometimes map IO addresses to memory
addresses to avoid such problems. The particularities are not that simple, and it is
in fact usually the task of the device driver to use kernel PnP services to map the
addresses internally.

I have no specific experience with this device, nor in fact with any ISA IO port
access through a PCI bridge, so I can't really help you here, but I'm afraid it
is not going to be very simple. Maybe you can influence something at the BIOS level
and try to force your OS to accept the BIOS settings, but in XP this chance is
also quite small, as Microsoft would rather manage this stuff themselves than
try to work around buggy PnP BIOS implementations.

Does anybody have a device driver for things like that in connection with Win XP?

No sorry, but you could check out the portIO device driver on OpenG. As it is, it
does not care about PCI bus mapping in the function HalTranslateBusAddress(), but
you could extend it to do so, even though Microsoft says this function is deprecated
in favor of newer PnP enumeration kernel services, which however are not available
before Win2000. I didn't want to use newer APIs as the driver should run on as many
Windows versions as possible. As to the limited mapping, I didn't feel that this would
be a huge limitation and, for lack of hardware to test anything on, didn't really care,
but in your case it would be necessary to actually allow for that as well.
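
As a rough illustration only, here is a kernel-mode sketch (assuming the usual
ntddk.h declarations) of what such an extension might look like: translating an
I/O port address on a given PCI bus instead of on the ISA bus. The bus number,
port value and the helper function name are hypothetical and are not part of the
actual portIO source.

#include <ntddk.h>

/* Translate an I/O port on a specific PCI bus to an address the CPU can use.
 * portIO as distributed uses the Isa interface type; passing PCIBus and the
 * bridge's bus number is the extension suggested above. */
static NTSTATUS TranslatePortAddress(ULONG busNumber, ULONG port,
                                     PULONG addressSpace,
                                     PPHYSICAL_ADDRESS translated)
{
    PHYSICAL_ADDRESS busAddress;

    busAddress.QuadPart = port;
    *addressSpace = 1;    /* 1 = I/O space, 0 = memory space */

    if (!HalTranslateBusAddress(PCIBus, busNumber, busAddress,
                                addressSpace, translated))
        return STATUS_UNSUCCESSFUL;

    return STATUS_SUCCESS;
}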

The portIO driver is part of the OpenG Toolkit but not currently included in the
binary distribution of the Toolkit. You can however get it from the sourceforge CVS
repository of the OpenG Toolkit or directly through CVS web access.
http://cvs.sourceforge.net/viewcvs.py/opengtoolkit/portIO/

Rolf Kalbermatter





RE: Recursive Code Challenge

2004-06-06 Thread Rolf Kalbermatter
Dominic Lavoie [EMAIL PROTECTED] wrote:

Could someone explain in a simple way what are the advantages of making a
function recursive instead of just making the data recursive(arrays or
strings) and building one function to play with that data(while loops)?

It's about human understanding. A recursive problem can quite simply be
implemented by a recursive algorithm with a function calling itself.
HOWEVER: recursive functions come with a cost, namely that of higher resource
demands, and with LabVIEW VIs being a little fatter than a typical C function,
this cost is accordingly higher. Also, while it is simple to implement a
recursive algorithm with recursive functions, it is not so simple to
guarantee that such a function will perform properly in all corner cases.

If you implement a recursive algorithm through some sort of stack implementation
instead, for instance, you have to spend more time on the initial design and you
are usually forced to think about corner cases at the same time. Such an algorithm
has at least as good a chance of actually working, but it does not present itself
as naturally to most people (see the sketch below).
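
To make the trade-off concrete, here is a minimal C sketch of the same traversal
written once recursively and once with an explicit stack; the Node type and
visit() are placeholders for whatever data and per-element work the real problem
has.

#include <stddef.h>

typedef struct Node { int value; struct Node *left, *right; } Node;

void visit(Node *n);   /* the per-element work, whatever that may be */

/* Recursive form: follows the problem structure naturally. */
void traverse_recursive(Node *n)
{
    if (n == NULL) return;
    visit(n);
    traverse_recursive(n->left);
    traverse_recursive(n->right);
}

/* Stack-based form: more up-front design, but the resource use is explicit. */
void traverse_iterative(Node *root)
{
    Node *stack[1024];            /* a fixed-size stack keeps the sketch simple */
    int top = 0;

    if (root != NULL) stack[top++] = root;
    while (top > 0) {
        Node *n = stack[--top];
        visit(n);
        if (n->right != NULL) stack[top++] = n->right;
        if (n->left  != NULL) stack[top++] = n->left;
    }
}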

And since most LabVIEW programmers are lazy :-) (including yours truly), they
often feel sorry that recursion does not work as simply as they would like.

Rolf Kalbermatter





RE: [OT] NI Internal Programs

2004-06-04 Thread Rolf Kalbermatter
Subject: [OT] NI Internal Programs
From: [EMAIL PROTECTED]
Date: Thu, 3 Jun 2004 15:24:42 -0400 

I've always wondered what NI uses internally to write LabVIEW. To be more
specific, the LabVIEW core (not RT, PDA, etc.) on Windows.

What is the language breakout? (%C, %C++, %Assembler, %LV, etc.)

I would assume no assembler at all. Maybe that was different in the old
days with Windows 3.1, which had a lot of complicated things to deal with
since LabVIEW was 32-bit (which might have been impossible otherwise), but
nowadays every line of assembly means a lot of additional effort for a system
targeted at so many different platforms.

Traditionally LabVIEW was programmed in C, but they changed the compiler
to C++ around version 6.0. Nevertheless most of the old code is still
C, I assume. The error messages now all show .cpp files, but the code inside
can still be the old code (and from some debugging sessions I had in the
past it looks like it was generated from standard C for large parts) with some
small adaptations. And so it probably is for a lot of things, because rewriting
everything from scratch would take several years for sure, not to mention
the possibility of countless compatibility problems on the lowest binary
level of everything. New functionality since LabVIEW 6.0 most probably is
clean C++, however.

Since LabVIEW 7 there are also some tools written in LabVIEW itself, such
as the Property Dialog, the Zoom utility, etc.

Also the extensive and automated Test Framework each LabVIEW version has
to go through is of course written in LabVIEW.

Which development environment? (MS Visual Studio, Borland, GNU??)

I think they still use Visual C on Windows (all flavors including PDA).
Probably CodeWarrior for the Mac OS Classic version and GNU for the rest.

Any third party tools? (compilers, editors, debuggers, etc.)

Not sure about compilers. Maybe Yacc/Flex for the script node interpreter
compiler. Debuggers: you can live with Visual C for a large part.
Editors: probably as many as there are developers. There is simply no editor
which suits every programmer. There are people who like to use Emacs for
programming. The only editors I don't expect to see used by a programmer
are probably Notepad ;-) and MS Word :-))

Bug Tracking tools? (Bugzilla, etc.)

No idea, they used to have their own bug database built on Oracle.

SCC? (SourceSafe, CVS, etc.)

I think there have been mentions of Perforce here on Info-LabVIEW.

Rolf Kalbermatter





Re: Bird's Eye View 1.5.0 Available

2004-06-03 Thread Rolf Kalbermatter
Sheldon Instruments, Inc. [EMAIL PROTECTED] wrote:

As they did the DLL utility 'downshift', NI will probably eventually
incorporate this feature in their s/w

It is part of LabVIEW 7.1, one of the few nice features you get with that
version. Although it can sometimes be discouraging to spend all the time
developing something like this, only to see NI crank out the same, in
a better or worse way, in the next version, it is definitely a very
effective way to get new features into LabVIEW. Some sort of
uber-suggestion reporting. Proving to them it can be done triggers some
of their developers very effectively ;-)

I had the same experience with my LLB viewer utility for showing the contents
of LabVIEW libraries in Windows Explorer ;-)

Rolf Kalbermatter





Re: Bird's Eye View 1.5.0 Available

2004-06-03 Thread Rolf Kalbermatter
However, with 'downshift' did the guys in Rochester receive anything
for that? I hope so.

Wasn't downshift the CIN to call DLLs? If so, it would seem to me a long
way from that to the current Call Library Node, especially considering
that what that CIN did was actually something really provided
by the Watcom C runtime library used in LabVIEW all over the place.

Rolf Kalbermatter





RE: Not quite off-topic: List misuse

2004-06-02 Thread Rolf Kalbermatter
Ross, Michael [EMAIL PROTECTED] wrote:

As a list administrator for a small specialized mailing list and a
participant in Info-LabVIEW I have some opinions on this topic.

This list has never been commmercial content free, but commercial
mailings are expected to be kept inobtrusive and brief.  That seems to
work and who am I to argue.

Just my two cents here. From the original post I got the impression
that the poster had had previous contact with this person, as an answer
to some question he had put on Info-LabVIEW. At that time the mailing
was apparently appreciated, or at least no hard feelings were present.

If that is really the case (and I really don't know if my impression
is right), then what has been happening is that the advertiser apparently
kept the poster's address in his address book and later sent out another
marketing mailing to these addresses, maybe not really related
to the original inquiry.

As such the whole issue would be a little bit unclear. I myself
probably wouldn't think too highly of such an advertiser, but on the
other hand I certainly wouldn't report him as a spammer to blocklists
either, and would most probably just delete the mail without thinking much
further. If I however receive a mailing out of the blue, it would
almost certainly be reported to the blocklists I use, maybe even
manually reported to its ISP, and then disposed of.

Rolf Kalbermatter





Re: Polymorphic functionality of Delete From Array

2004-06-02 Thread Rolf Kalbermatter
George Gatling (Contractor) [EMAIL PROTECTED] wrote:

The Delete From Array function has some interesting polymorphic behavior... 
it changes the type of one of its output terminals based on *whether* an 
input terminal is wired (as opposed to what type is wired to the input 
terminal).  In particular, if length is unwired, a scalar is output.  If 
length is wired, an array is output.  I would *love* do duplicate this 
exact behavior for a Read Row(s) From File function.  Anyone know if this 
is possible?

No, this is not possible the way you want it. This dynamic adaptation needs direct
manipulation of the code generator in LabVIEW and as such is only possible
in LabVIEW built-in nodes. What you can do is create polymorphic VIs and make
some of the inputs not present in all versions of the subVIs. There won't be
automatic adaptation when wiring, but you can right-click and select the version
of the VI you want to use.

Rolf Kalbermatter





Re: Maybe off topic - freeware and commercial ad

2004-06-02 Thread Rolf Kalbermatter
Irene He [EMAIL PROTECTED] wrote:

Sure we would appreciate real freeware and thanks for it,
but there is no need to depreciate non freeware, I would
have understand if these people are from some non-profit
organization like university, but people from industry
they should accept non freeware just as with freeware if
they don't provide free service themselves.

Who doesn't like freebies? :-)
And of course it is easier to look past an advertisement for
some free tools than an aggressive commercial ad.
However, actually selling LabVIEW add-ons is a very tough thing
for anyone who is not National Instruments. It usually requires
more effort in marketing, support, documentation and
such than you can in most cases earn back through sales.

Regarding the ad, I like to see some short ad in this list. I
like to get notified about new things in LabVIEW world, so that
I know one day if I need this feature, there is a way to go
rather than sitting in the dark. There are too many information
in internet, but there are only few places to hang out for me
like this forum, so I would like to get notified here to make
me feel it's powerful to work with LabVIEW, because you can do
so many things...or would you rather not knowing things that
is capable with LabVIEW?

I have nothing against a short announcement for some new tools or
products related to LabVIEW on this list, and in fact our former
list maintainer Tom had a policy of explicitly allowing such
advertisements if they were brief and not intrusive, and offered,
in case of doubt, to give his opinion on whether a message was
appropriate for the list or not. I certainly assume that this
also applies to the current list maintainer.

What I consider absolutely intolerable would be to harvest some or
all of the email addresses from the Info-LabVIEW posts and start
to mail out advertisements even for free stuff.

Hope this makes sense.

Of course it does ;-)

Rolf Kalbermatter





Re: Brooks Flow Controllers

2004-05-28 Thread Rolf Kalbermatter
Someone wrote:

Has anyone had any experience using Brooks Smart Series Flow
Controllers?  We've had a customer contact us, and is interested in talking
to their controllers using RS-485, but we haven't had any luck in finding
the comms protocols...

I just happened to stumble across Brooks Flow Controllers. I was looking for
some examples and documentation about accessing their Smart DDE software with
their external controller. Although we have some VIs from Brooks, I did not
find them very clean and was looking for information on how to access them
directly through the DDE interface of LabVIEW DSC. For that I would have to
tell the software at startup to initialize the COM port so that I only have
to open Advise Links to the Topic Items. The customer promised to come back
to me with the documentation he had at home.

Lewis Drake [EMAIL PROTECTED]

Brooks has a set of LabVIEW drivers for their controllers, check with them.
They normally charge for them, but if you negotiate well, you can get them
free.  We did.

There seem to be multiple VIs around. We have the Smart DDE ones which he got
from Brooks, not sure if for free ;-)
I also found VIs on a German reseller site for Brooks which talk directly to
the intelligent controllers. Supposedly they use the HART protocol, although
I couldn't confirm that from the VIs, as I have no experience with HART. Not that
they implement any real protocol; in fact they just send a few hardwired
bytes, so they would likely only work with those specific controllers.

Rolf Kalbermatter





Re: GOOP and spawning VIs on FieldPoint RT controller

2004-05-28 Thread Rolf Kalbermatter
Tore Johnsen [EMAIL PROTECTED]

Yep, I use the template technique and pass data to the spawned VIs using 
the Set Control Value method prior to running them. In the past (for non 
time-critical VIs) I haven't bothered making the spawned VIs reentrant 
since they are copies and run just fine in parallel without (different 
story for common Sub VIs that should run in parallel).

You can have templates with template subVIs. For that you put the template
subVI on the diagram by using the file dialog from the Functions palette.
A blue T should appear in the icon.
Now, on loading the template main VI, LabVIEW also instantiates the template
subVIs. I used that technique a few times for communication with the template
main VIs by making the template subVI an intelligent LV2-style global. The main
VI just starts up as an independent task and sits there and runs, and I retrieve
the refnum to the created LV2-style template subVI to send commands and receive
data through it from other places.
Another way of communicating would be queues, but my LV2-style global approach
had some nice advantages for the application in question.

PS: nice domain name by the way ;-)

Rolf Kalbermatter





Re: webcam

2004-05-27 Thread Rolf Kalbermatter
Irene He [EMAIL PROTECTED] wrote

Or you can try this link too:
http://www.hytekautomation.com/Products/IVision.html
Besides the image capturing, it has also many camera control functions and
image processing functions. It's WDM based, has many advantages over VFW
based drivers. (I think VFW is being abandoned).

WDM is the device driver model for accessing the hardware. It is not just for
video capture; virtually all modern W2K and XP drivers are WDM drivers. VfW is
one of the APIs to access multimedia image acquisition hardware. I think an
application does not access a WDM driver directly but uses DirectShow or
whatever Direct... for that. As such VfW is not state of the art anymore (you
could say its use is deprecated), but a lot of WDM-based video grabbers still
have a VfW wrapper, and failing that there is always a DirectShow-to-VfW
wrapper. The advantage of VfW is its simple use, and a lot of applications
(including many commercial ones) still depend on it.
That said, DirectShow access to a WDM grabber driver will usually perform
better than going through VfW.

Rolf Kalbermatter





Re: webcam

2004-05-27 Thread Rolf Kalbermatter
Scott Hannahs [EMAIL PROTECTED] wrote:

Actually, I just wrote one using one of the examples Christophe Salzmann's
quicktime interface.  Works like a dream!!!  Cross platform and handles all
camera's that have a quicktime driver.  Contact Christophe for a copy.

The sad part is that on all platforms other than the right one,
video digitizer hardware with QuickTime drivers can be counted on the fingers
of one hand. There is a QuickTime-to-VfW video digitizer driver for
Windows to access VfW hardware from QuickTime, but it is not exactly
elegant.

Rolf Kalbermatter





Re: Call Library quandry

2004-05-27 Thread Rolf Kalbermatter
Bruce Bowler [EMAIL PROTECTED] wrote:

First, I'm running a hopelessly ancient version (5.0.1) of LabView.  As
soon as I can get the boss to spring for it I'll upgrade but until then,
I'm stuck...

Ugh, 5.0.1 wasn't the most stable one. 5.1.1 was in my experience quite a bit
better. Also be aware that while upgrading applications from one version
to the next is usually quite an easy step, upgrading from 5.0 to 7.1 has
a good chance of hitting some rough bumps in the road.

What I've tried so far, is to create a byte array that's 46 bytes long (as
near as I can tell, that's the size of the structure.)  Converted that to
a string, passed the string to the call library function and wired the
output to a string indicator.  When I run it, I either get an error
complaining about gnrclist.c, line 192 or the dreaded illegal operation.

46 bytes is definitely not enough!! Just look at this:

typedef struct _TIME_ZONE_INFORMATION {
    LONG       Bias;               /*  4 bytes     */
    WCHAR      StandardName[32];   /* 32 * 2 bytes */
    SYSTEMTIME StandardDate;       /*  8 * 2 bytes */
    LONG       StandardBias;       /*  4 bytes     */
    WCHAR      DaylightName[32];   /* 32 * 2 bytes */
    SYSTEMTIME DaylightDate;       /*  8 * 2 bytes */
    LONG       DaylightBias;       /*  4 bytes     */
} TIME_ZONE_INFORMATION;

Adding this together I come up with 172 bytes, so no wonder LabVIEW is taken
down by Windows when you pass in only 46 bytes.
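
If in doubt, a two-line check against the real Windows headers confirms the
arithmetic (just a sketch; TIME_ZONE_INFORMATION comes from windows.h):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Prints 172 on Win32: the structure has no padding. */
    printf("%u\n", (unsigned)sizeof(TIME_ZONE_INFORMATION));
    return 0;
}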
Also, I'm not sure if LabVIEW already does this in 5.x, but later on, if you pass
in a string to a CLN, LabVIEW cuts it down at the first NULL byte in it on return.
So for structures it is better to pass an array of U8 to a library function if
you need to access some information inside that structure after the function
returns.

Rolf Kalbermatter





RE: DAQmx, etc.

2004-05-26 Thread Rolf Kalbermatter
 API was in big parts not and therefore protection was used in
the intermediate lvdaq.dll to block entire resources with semaphores while
they were in use. DAQmx is a new framework supposedly redesigned from the
ground up, and blocking most probably only happens at the lowest level for
the time an external resource (hardware registers, etc.) is accessed, but
not for an entire high-level call.

Rolf Kalbermatter





Re: DAQmx, etc.

2004-05-26 Thread Rolf Kalbermatter

While I'm certainly not an NI-488.2 expert, I believe there are both
synchronous and asynchronous functions in NI-488.2.

Not in the 488.2 spec but in the C library and/or the driver and at the
VISA call level. The problem is that there is NO NI-488.2 for OS X
(at the moment). So I am developing sth-488.2 based on the incomplete
488.2 DDK toolkit that NI provides.  The trick is how to move that
asynchronous call in the library (when I get it implemented) into the
synchronous nature of a LabVIEW call into a library.

There are two options for this.

One is using occurrences. You basically call into the external code passing
it the occurrence refnum, invoke the asynchronous C API function in there,
and return back to LabVIEW. The C API signals completion either
through an OS event or a callback, and at that time you trigger the occurrence
in your C code.
To do this triggering you simply call the Occur(LVRefNum occurrence)
function. This function is not prototyped in extcode.h but is exported
by the labview.lib link library in all versions of LabVIEW.

The LabVIEW diagram waits on the occurrence with a timeout, and on return
of Wait on Occurrence you either call the external code to retrieve
the data (no timeout) or cancel the operation (timeout occurred).
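
A minimal sketch of the external-code side of this, assuming a hypothetical
asynchronous library call lib_start_async(); Occur() is declared here by hand
since it is not in extcode.h:

#include "extcode.h"

extern MgErr Occur(LVRefNum occurrence);              /* exported by labview.lib */
extern int lib_start_async(void (*done)(int result)); /* hypothetical library call */

static LVRefNum gOccurrence;   /* occurrence refnum handed over from the diagram */
static int      gResult;       /* data delivered by the completion callback      */

/* Completion callback invoked by the library: store the data, fire the occurrence. */
static void OnCompletion(int result)
{
    gResult = result;
    Occur(gOccurrence);
}

/* Called through a Call Library Node: registers the callback and starts the call. */
int32 StartAsyncCall(LVRefNum occurrence)
{
    gOccurrence = occurrence;
    return lib_start_async(OnCompletion);
}

/* Called from the diagram after Wait on Occurrence returns without timeout. */
int32 RetrieveResult(void)
{
    return gResult;
}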

This is basically the same as what the VISA Read node, for instance, does internally,
although it does this in the LabVIEW code and not on a diagram. I have not
been able to find the necessary LabVIEW API calls to actually place all this
into the external code. The big problem is implementing the
Wait on Occurrence in a way that LabVIEW knows it should do other
things until the occurrence is triggered. It may also be that this is
theoretically doable in CINs only, as CLNs do not have the necessary
additional calls such as CINAbort etc. to allow such external code to be
interrupted from waiting when the user wants to shut down the application.

The other solution is to actually stay in the C code and pass control back
to the OS through OS API functions while you wait for the event to occur.
I do believe this is what DAQmx does. This blocks the LabVIEW thread invoking
the CLN but allows LabVIEW and other applications to continue to make use
of the CPU. A cosmetic issue with this, at least under Windows, is that in the
Task Manager LabVIEW seems to use up 100% CPU power while waiting in this
way, but everything on the system is as responsive as otherwise.
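
A sketch of this second approach using POSIX threads (relevant for the OS X
case discussed here); lib_start_async() and its completion callback are again
hypothetical placeholders for the real library API:

#include <pthread.h>
#include <sys/time.h>
#include <errno.h>

extern int lib_start_async(void (*done)(void));  /* hypothetical library call */

static pthread_mutex_t gLock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gDone = PTHREAD_COND_INITIALIZER;
static int             gFinished;

/* Completion callback invoked by the library, possibly from another thread. */
static void OnDone(void)
{
    pthread_mutex_lock(&gLock);
    gFinished = 1;
    pthread_cond_signal(&gDone);
    pthread_mutex_unlock(&gLock);
}

/* Called through a Call Library Node: starts the operation and blocks only
 * this thread until completion or timeout. Returns 0 on completion, 1 on
 * timeout, negative on a library error. */
int CallBlocking(int timeoutMs)
{
    struct timeval  now;
    struct timespec deadline;
    int             err, result = 0;

    gFinished = 0;
    if ((err = lib_start_async(OnDone)) != 0)
        return -err;

    gettimeofday(&now, NULL);
    deadline.tv_sec  = now.tv_sec + timeoutMs / 1000;
    deadline.tv_nsec = now.tv_usec * 1000 + (timeoutMs % 1000) * 1000000;
    if (deadline.tv_nsec >= 1000000000) {
        deadline.tv_sec++;
        deadline.tv_nsec -= 1000000000;
    }

    pthread_mutex_lock(&gLock);
    while (!gFinished && result == 0) {
        if (pthread_cond_timedwait(&gDone, &gLock, &deadline) == ETIMEDOUT)
            result = 1;
    }
    pthread_mutex_unlock(&gLock);
    return result;
}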

Rolf Kalbermatter





Re: Time zones and time stamps

2004-05-25 Thread Rolf Kalbermatter
 like using the same switching dates as the rest
of Europe. At that time you could easily have three or more different DST periods
in the same year in Europe alone.

Rolf Kalbermatter





RE: Time zones and time stamps

2004-05-25 Thread Rolf Kalbermatter
George Gatling wrote:

 All the same, the seconds to date/time provides a is DST flag... so there 
 could still be a boolean input on the format to string function regarding 
 whether to ignore the DST flag.  Just a thought though.

There could be, if it were used at all!! The underlying Windows API functions
most probably simply ignore this flag, and so probably do all other system APIs,
including the Mac.
Also note that the flag as returned from the Seconds To Date/Time function, at
least until LabVIEW 6.1, does not indicate the DST state of the timestamp
entered in reference to the DST switching times for the current time
zone, but the DST state of the current system time. As such it is pretty much
useless or in fact even misleading.

LabVIEW 7.0 fixed that to a certain extent, at least for the Seconds To
Date/Time function, which seems to correctly indicate the daylight saving
time flag in relation to the current locale and the actual time stamp
converted.
I say fixed it to some extent because the actual offset in seconds used
is still in relation to the current timezone and DST of the current time.

Consequently, Date/Time To Seconds blissfully ignores the DST flag
completely and simply uses the current time zone and DST offset
to convert a time cluster into a timestamp.

This behaviour, although somewhat strange and illogical, is however highly
desirable in terms of functional compatibility with earlier LabVIEW versions,
as many programs might suddenly misbehave if it had changed.

Also note that the GMT delta library I mentioned earlier, downloadable
from the Developer Exchange, actually has a problem when upgraded
to LabVIEW 7.0, at least in the version I downloaded sometime last year.
It is not because of changes to the above functions (which have not changed
at all in this respect) but because the hidden timestamp used as reference
(Jan 1, 1980 0:00) gets modified when the VI is loaded into LabVIEW 7.0,
by adding either the current DST offset or the current timezone offset
(I'm not sure which one, as they are both 1 hour for me at the moment)
to it.

Rolf Kalbermatter





FW: Time zones and time stamps

2004-05-25 Thread Rolf Kalbermatter
-Original Message-
From: Rolf Kalbermatter [mailto:[EMAIL PROTECTED]
Sent: Tue, May 25, 2004 18:39
To: 'George Gatling (Contractor)'
Subject: RE: Time zones and time stamps

George Gatling (Contractor) [mailto:[EMAIL PROTECTED] wrote:

 Alright, so then it would seem that the best solution would be to
 add an integer input to the format date time string function to 
 specify offset from UTC... and to get UTC time you would wire a
 zero to this.

I was thinking more along the lines of adding an extra formatting
specifier, %^ or something like this, to the beginning of the format
string, which would say to format the time in UTC instead of local
time. The advantage of this would be that in LabVIEW 7 the same
formatting specifiers are also used in the advanced formatting
settings for numeric and timestamp controls displaying absolute
time/date. Maybe an extension of this would be %offset in seconds^
to apply a specific time zone offset.

Rolf Kalbermatter





RE: Time zones and time stamps

2004-05-25 Thread Rolf Kalbermatter
George Gatling (Contractor) [mailto:[EMAIL PROTECTED]

 Hmm... not a bad solution either.  And with no %^ the function
 would continue to use its existing behavior?

Yes, that is the idea, even without mentioning it specifically. Backwards
compatibility is very important in such things.

 At first I think it would be more convenient to specify offset
 in hours, but in the back of my mind I vaguely remember that
 there might be some places that use something really weird, 
 like 30 minutes, for their daylight time... in which case I 
 agree that  seconds is the more consistent units to use when
 specifying the offset.

Exactly my idea. And it would not only be for DST adjustments but also
to display timestamps with a specific timezone offset, which, as I
know for sure, can have even smaller adjustment granularity than 30
minutes.

Rolf Kalbermatter





RE: Slow Printing

2004-05-21 Thread Rolf Kalbermatter
Dan Press [EMAIL PROTECTED] wrote:

I create a separate application; let's call it the YYY_Project Print
Service.exe.  The application's job is just to sit there, but its INI
file configured the VI Server to listen on a port other then the default
3363.  See the app note - 

Overall a very interesting and good idea to do!!

Remember that built apps act like folders when you want to open a
reference to a specific VI.  Just wire the path to your printing VI such
as C:\My App\YYY_Project Print Service.exe\Print Me.VI

This is not necessary. In built applications LabVIEW (since at least ~6.0)
always searches the internal LLB for a VI matching the last part of the path
first, before even attempting to locate the VI at the absolute path, if any
is given. So for this one you can actually use a string with the VI name only
instead!

Rolf Kalbermatter





Re: DAQmx, etc.

2004-05-20 Thread Rolf Kalbermatter
Scott Hannahs [EMAIL PROTECTED] wrote:

At 7:58 -0500 5/19/04, [EMAIL PROTECTED] wrote:
If the task acquires a finite number of samples and you set this input to 
 -1, the VI waits for the task to acquire all requested samples, then reads
 those samples.

Is this a blocking wait like the non-DAQmx call into a CIN (or LLB) or
is it a LV style wait where other LV nodes even in the same thread can
execute?  That would be nice!

It is a blocking wait for the thread in which the Call Library Node to DAQmx is
called, and in LabVIEW before 7.0 this would cause the execution subsystem
in which the corresponding VI runs to block entirely. LabVIEW 7.0 and higher
allocates by default several threads per execution subsystem, so other VIs
besides the CLN in the same execution subsystem can keep running.

Is there a way to do this threading in our own created dlls (frameworks)?.
I would like to make my PCI-GPIB library non-blocking but that is not
trivial because of this limitation. Is there a way to set up a call back
into LV for a non-blocking wait from a CIN or framework?

You don't need to do anything special other than making sure your DLL is
reentrant (no globals, or if there are, they must be protected by mutexes,
but watch out not to create deadlocks if you end up
using more than one mutex to protect different resources) and then setting
the Call Library Node to execute reentrant (the node changes from orange
to light yellow if it is reentrant).
Before LabVIEW 7 this would block the execution subsystem anyhow, unless
you reconfigure the thread allocation for that subsystem with an
undocumented VI in vi.lib/utility/sysinfo.llb. In LabVIEW 7 it will
not block the execution subsystem but only the single thread LabVIEW uses
to call that library function.
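
For what it's worth, protecting shared state in such a library can be as simple
as the following sketch (plain pthread mutex; the shared counter is just a
stand-in for whatever state the real library keeps):

#include <pthread.h>

static pthread_mutex_t gLock = PTHREAD_MUTEX_INITIALIZER;
static int             gSharedCounter;   /* stand-in for real shared state */

/* Safe to call from several LabVIEW threads at once when the CLN is reentrant:
 * every access to the shared state is serialized by the mutex. */
int IncrementCounter(void)
{
    int value;

    pthread_mutex_lock(&gLock);
    value = ++gSharedCounter;
    pthread_mutex_unlock(&gLock);
    return value;
}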

Then again, DAQmx could be ported to all the other platforms since it was
written in such a nice modular way that it only needs a trivial stub driver
for other platforms.  :-)

Everybody would hope it's trivial but I'm sure it is anything but that ;-)

Rolf Kalbermatter





Re: Capturing the keyboard

2004-05-19 Thread Rolf Kalbermatter
Marco Tedaldi [EMAIL PROTECTED] wrote:

We've got a Video-Application (not in LabView) which should show a video 
on the same screen as the labwiew-application runs. The Problem ist, 
that de Application stops playing the Video as soon as it looses the 
Focus :-( But the Operator needs to write some Data in the Labview-App 
while watching the Video.

With the Initialize Keyboard.vi and the Acquire Input Data.vi I can 
get the Keyboard-Events even if the my app is in the bakcground, but the 
main-application still gets the events too, which could possibly disturb 
it :-(

There you have a dilemma. The active window is also the one which has the
keyboard focus, and the whole concept of Windows is based on this. You might
be able to trick this a little with floating windows if they were
all in the same process, but as separate processes you can't make a window in
one process active and give a window in another process the keyboard focus.
To do that you would have to intervene deeply in the Windows kernel itself.

So I'm seeking for a way to capture the events completely whitout 
another application recieving them. As Bonus it would be great, to 
forward the unused Keys to the main-app.
This should work on Windows. Portability is no issue at the Moment, so I 
  could also use System DLLs or DirectX.

Well, one way you could do this is by writing a system-wide keyboard hook in
a DLL, but this is going to be very tricky. The reason your video stops
playing is probably that the video playback is driven by the Windows
message queue itself instead of by a separate thread which can keep running
even though the application is not active anymore. Using a system-wide keyboard
hook which passes the key first to LabVIEW and then back to the
application if LabVIEW didn't eat it could slow down your video app very
seriously.
If my assumption about the reason the video stops playing is correct, another
way could however be to just send the main window of your video app regular
dummy events from within the LabVIEW app. Sending WM_NULL messages might
already be enough to keep the video playback going, and then your app wouldn't
need to be active to keep playing. It would have the additional benefit of
not misbehaving if the user happened to click into the LabVIEW app window
and make it active instead.
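
A minimal sketch of that nudge, callable from a small wrapper DLL or on a timed
loop; the window title "Video Player" is a hypothetical placeholder for the real
application's main window title:

#include <windows.h>

int NudgeVideoWindow(void)
{
    /* Find the video application's top-level window by its title (placeholder). */
    HWND wnd = FindWindowA(NULL, "Video Player");
    if (wnd == NULL)
        return 0;                          /* window not found */

    /* WM_NULL does nothing in itself but keeps the message pump turning. */
    return PostMessageA(wnd, WM_NULL, 0, 0) ? 1 : -1;
}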

Rolf Kalbermatter





RE: Labview, Linux and databases

2004-05-18 Thread Rolf Kalbermatter
Michael Porter [EMAIL PROTECTED] wrote:

Second, what kind of data are you going to be storing? If you are going to
be working with large datatypes like images or waveforms, you have a much
more fundamental problem--ODBC won't work for even Windows. My personal
opinion is that ODBC's inherent limitations make it unsuitable for any sort
of serious work unless it's your absolute last hope.

I don't agree on this one. There can certainly be problems with storing
more than (usually) 65 kB of data in a single BLOB field, but it all depends
on the DB and the tools used. The biggest problem on the client side is that
some DBs don't support BLOBs, and if they do, the way to do it is not always
the same. Also, there are certainly ODBC drivers which simply fail above 32 kB
or 65 kB. And last but not least, having an ODBC interface on your client side
which can cope with this is also not always standard.
We have had applications where waveforms or complete graphic images were stored
in an SQL Server database through ODBC, and after some tweaking it worked fine,
including retrieving the data, although using our own LabVIEW ODBC interface may
be one of the reasons it did work.
The biggest problem is probably that even though BLOBs are supported in some
databases, relational databases are not very well suited to storing BLOBs. It can
be done but is certainly not what they were designed for.

Rolf Kalbermatter





Re: Dealing with large file

2004-05-18 Thread Rolf Kalbermatter
[EMAIL PROTECTED] wrote:

I want to tell excel from LabView to open or import large data file.  The
data doesn't fit on a single excel worksheet (65500 rows).  are there any
techniques I can use (using Excel macro or labview) so that when one sheet
is filled the program opens another sheet until the file is open completely.

Well, with a lot of VBA code stuffed into a macro you should be able to do that.
But in fact most other spreadsheet applications out there do not have this
stupid limitation. Or you could use DIAdem.

Rolf Kalbermatter





RE: ActiveX server in Labview

2004-05-18 Thread Rolf Kalbermatter
Dalmady, Otto [mailto:[EMAIL PROTECTED] wrote:

Unfortunately that's what I was expecting but I thought I would ask.
For all it's strengths, the biggest weakness I've encountered in Labview
seems to be with it's interface to other applications (on Windows). 
I'm sure this is partly due to it's platform independent nature since
DLLs and ActiveX interfaces don't exist on non Windows machines.
It's too bad because we could do some really cool things if they were
better supported.

Well, I do think we do quite some cool things without using ActiveX. And
I do not really see too many things we couldn't do without ActiveX.
 
I've also, in the past, had the need to register a callback function in a
DLL (non activeX) but the concept of passing a pointer to a LV function
doesn't exist. Wrapper DLLs, created using other compilers, are often
needed to solve these issues. This tends to undermine the use of LV in
the first place.

Not for me. If I encounter such an issue I simply write a wrapper DLL and
am done with it in a few hours. Using ActiveX costs me just as much time,
and I have never had the problem that such a DLL suddenly didn't work after
upgrading Windows or LabVIEW, or even after just installing an MS Office
application.

I think we should include these enhancements in our LabVIEW wish list
(hope you are listening Mr McKaskle). Perhaps NI developers can somehow
tie activex into the event structure so a programmer can define events
that correspond to activeX methods. Both the event and method would be
created by the programmer. Also, allow for pointer to function capabilities
for DLL callbacks. 

I have some other pet peeves I would rather have NI spend their time on ;-)
I think having ActiveX methods representing event states would completely
obscure the event architecture in LabVIEW. I already find it less than clean
the way it is now.

Function pointer capability is another problem. It could be done in principle,
with a VI getting translated into a function pointer, but the setup of this would
be quite complex, more so in fact than the current Call Library Function dialog.
A lot of possible problems exist, multithreading and the context in which
the callback is called (e.g. at interrupt time) being some of them.

ActiveX itself specifically distinguishes between methods, properties, and
events. Why go and confuse these things altogether in LabVIEW?

Your mileage obviously varies, but I try to avoid ActiveX as much as possible
and have been quite successful at it.

My reasons:
- Windows only
- still unstable, and that is not only a problem of the LabVIEW ActiveX interface
- installation issues with registration of the ActiveX server and version issues
- a very complex technology to create a sort of universal interface, which typically
  has low-level changes in every new Windows version
- a lot of ActiveX object developers don't understand ActiveX and consequently create
  unstable or unusable objects
- most standard Windows objects change quite often, either in their registered name
  or, even worse, in their entire ActiveX object interface
- Windows only and proprietary

Rolf Kalbermatter





Re: system exec question

2004-05-18 Thread Rolf Kalbermatter
Bruce Bowler [EMAIL PROTECTED] wrote:

I'm really looking for an unzip VI, but since I'm running an ancient
(5.0.1) version of labview, I'm guessing that the SystemExec vi is the way
to go (Open G unzipper seems to require a more modern version of LabView).

So now to my question...

Is there a way, in the 5.0.1 version of SystemExec, to capture the output
of the command other than making a redirection part of the command?

I don't think there was a System Exec with that feature for LabVIEW 5.0.1,
but you may want to check the NI developer site, as they did at some point
have such a VI there, which then got included in one of the next LabVIEW
releases. I'm not sure, though, if that was for 5.0.
Chances are also that saving a 6.0 VI back to 5.0 might work, but I'm not
sure about the internals of the CIN used there. It may make use of CIN
functionality not supported in LabVIEW 5.

The lvZip library from OpenG, however, should be possible to downgrade to
LabVIEW 5.x. There are only two VIs I think will cause problems, and those
are Build Error Cluster.vi and List Directory Recursive.vi, both
because they use the Delete From Array primitive, which wasn't available in
LabVIEW 5.x.

Rolf Kalbermatter





RE: ActiveX server in Labview

2004-05-15 Thread Rolf Kalbermatter
Dalmady, Otto [EMAIL PROTECTED] wrote:

Using Windows XP, 2000, NT, 98, or 95,
Is it possible to have a compiled LabVIEW .exe act as a full Automation
server with my own defined Methods and Properties that can be
called/accessed as the VI runs (outside of the development environment)?

I'm sorry to have to tell you that this won't be possible the way you describe
it. I doubt that you could do that in MS Office or such either.
The LabVIEW ActiveX server interface is a generic LabVIEW interface to most
of its VI Server methods and properties and, with a little work on the client
side, does allow you to run, execute and query any VI inside the application
as if it were another method of the ActiveX server. It's not as
point-and-click in, for instance, VB as it would be if the methods were directly
exported as part of the ActiveX object model, but it gets very close to that.

Rolf Kalbermatter





RE: Corrupted llibrary due to crash! Any Idea?

2004-05-13 Thread Rolf Kalbermatter
Lewandowski, James R. [EMAIL PROTECTED] wrote:

for the second time an XP machine crashed while saving a subvi of a library
and corrupted the whole library. 
At one point I believed that a llb file was only a list to point to
other subvi's but it really seems that is not the case. I was working
on a new subvi that was saved into the llb when the corruption occured.
I only lost the work I had done today, and am currently waiting on our
support people to restore the file from a backup of a couple months ago.  
Any ideas how to get at this work? 

Well, depending on when the problem occurred during the save, there may be
a small chance to retrieve some of the work. If LabVIEW however can't browse
into the LLB anymore, it has to be considered a total loss. NI might be able to
retrieve some of the information, but this is a laborious job and they have a
lot of other things to do than put their developers on such a task.

Considering the loss of only one day of work, anything other than trying to
retrieve the data file by file, by trying to open each one in LabVIEW, will
simply cost you more than recreating it.

I do think your computer may have some serious problem, either hardware or
software related (bad RAM, HD, or a corrupt LabVIEW or Windows installation).
The way LabVIEW saves LLBs should almost completely prevent such things, as
it creates a temporary copy of the LLB in your temp folder, modifies everything
in there and, when no error occurred, moves the temporary file to replace
the original file.

A few suggestions: During development don't use LLBs. They are a complete
no-no if you ever want to use source code control, and have little use nowadays
with all OSes allowing for at least 32-character filenames (MacOS only) or
usually 255.
They are a remnant of the times when LabVIEW had to run on Windows 3.1
with its nice 8+3 character filename limitation. Without that they would
never have been invented.
Individual file loading is sometimes, and saving generally, faster than using
LLBs, depending on the LLB size. And the little compression you gain in an LLB
compared to individual files is no argument with GB of HD space being bought
for single dollars nowadays.
Creating subdirectories inside your project is much better, less error prone,
and allows you to manage and supervise the source code of your application
in the standard OS file browsers if you need to.

Rolf Kalbermatter





Re: Keyence and LabView

2004-05-07 Thread Rolf Kalbermatter
Gokul [EMAIL PROTECTED] wrote:

I am trying to acquire data using LabView from KV16AR
PLCs data memory. A program in PLC is going to dump
data in PLCs data memory and LabView is going to
simultaneously acquire it. The transmission is over
serial port and I have done serial port acquisition
before with temperature controllers. 

I felt both type of acquisition are the same...but
still wanted to ask... Is this any different? ... I
have never worked with PLCs before..thats why wanted
to know.  

Well, serial programming is serial programming! But then no device or
protocol is really the same. RS-232 means recommended standard and is not
always followed to the letter, and even then it only really specifies the
electrical characteristics.

The protocol depends on the more or hopefully less
warped mind of the original developer. As long as it is
decently documented there is always a way to make it work,
serious bugs in the device firmware excluded. If you have
working C or even Basic sample code, that usually helps too
in understanding what the protocol's intimate details are.

There is no reason to assume that PLCs are worse or better
in the design of their communication protocol than any other
device, other than that there might have been a tendency in the
past to cook up their own protocols to make sure they could sell
their own programming tools and peripheral devices.

Rolf Kalbermatter





Re: Labview 7.1 released 5/4/2004

2004-05-07 Thread Rolf Kalbermatter
andrew johnson [EMAIL PROTECTED] wrote:

Is there yet a list of labview bug fixes that 7.1 incorporates?

NI doesn't release such a list.

Also, has anyone been able to find information on what, if anything, 
has changed in DSC? There is a new version of DSC as well, 7.1, but 
there doesn't seem to be a listing of new or modified features.

No modifications or new features. Just incorporating all the bug fixes
they found. Accordingly the DSC Toolkit is a free upgrade if you upgrade
from the 7.0 version.

Rolf Kalbermatter





Re: LV for Linux and TCP/IP

2004-05-07 Thread Rolf Kalbermatter
Bookwalter, Dan [EMAIL PROTECTED] wrote:

Not having seen the Linux version, so this is probably a dumb question , but
,does it have the same TCP/IP support as the other platforms ?? any other
things I need to be aware of

TCP/IP support: Yes.

Well, Linux works in a few ways differently from Windows, so your habits
(shortcuts, etc.) may have to be modified slightly. And hardware support
other than TCP/IP and serial is more of a hassle, although that seems
to have started improving lately.

Rolf Kalbermatter





RE: More then one active Vi with open Front panel?

2004-05-06 Thread Rolf Kalbermatter
Michael Aivaliotis [EMAIL PROTECTED] wrote:

 My problem is I enjoy discussions on methodology and implementation
 techniques too much. In our local LabVIEW user group meetings we
 sometimes (actualy always) discuss (actually argue) programming style.
 The issue lately is: single loop good, multiple loops bad.

Are you kidding? Most complex programs I write would never fulfill user
expectations without multiple loops in the main VI. You don't want to
block alarm handling or built-in TCP/IP server handling for remote control
just because a user wants to check the configuration settings.
Of course, making sure the local user and the remote application properly
lock each other out is another matter, but quite a few remote commands do
not need any local lockout at all to be performed.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 





Re: NI-VISA Serial

2004-05-06 Thread Rolf Kalbermatter
Kieta, Stanley [EMAIL PROTECTED] wrote:

1. Is there an easy way to include VISA with the Application Installer so the
user only needs to run 1 install program?

In the LabVIEW 7 Application Builder go to the Installer Settings tab->Advanced button and
select Serial Port Support. This includes a minimal version of NI-VISA for serial
support. No MAX is installed though.

2. I need the application to automatically find the comm ports on each user's PC. 
The VISA resource name control on my front panel lists these ports using the
ASRL1::INSTR format.  Is there any way to get the control to list the conventional
COM1 notation? I tried copying the visaconf.ini into the same directory as the
executable but that did not work.

The visaconf.ini file belongs in C:/VXIPNP/OSVer/NIVisa. Replacing it with one
from another computer is probably not such a great idea. You could however look in your
file at the [ALIAS] section and make sure you add the missing information to the file
on the new computer, for instance during startup of your application.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 





Re: Records in Datalog files?

2004-05-05 Thread Rolf Kalbermatter
Smith, Mark E [EMAIL PROTECTED] wrote:

A user asked the maximum number of records one can store in a datalog
file - I don't know.  I suspect it relates to some integer counter and
would be 256 or more likely 65,000 or so.  Anybody know offhand?

Most probably 2^31-1, as LabVIEW internally uses 32 bit integer values
for such things. Of course, seeing that a record would usually take at least
several tens of bytes, this amounts to a single file which could easily
fill up even the biggest HDs nowadays (2^31-1 records at, say, 50 bytes each
is already around 100 GB). So I think you would sooner get into trouble
when the file as a whole gets close to 2 GB or 4 GB.

Using such big single files does not seem like a good idea at all
anyhow, as the chance of file corruption is directly correlated to size.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 





RE: Watlow 96 Controller

2004-04-24 Thread Rolf Kalbermatter
Eric Scott [EMAIL PROTECTED] wrote:

 I have a Modbus 8RTU CRC generator if needed. Let me know if
 interested. I don't know if it will work with Modbus 7 ASCII,
 as I have never used that protocol.

Modbus RTU and ASCII use different checksum calculations. RTU uses the well
known CRC-16 algorithm and ASCII uses a so-called LRC-8 algorithm.
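
For reference, a rough C sketch of the RTU CRC-16 (initial value 0xFFFF,
reflected polynomial 0xA001, result transmitted low byte first); the function
name is just illustrative:

#include <stdint.h>
#include <stddef.h>

static uint16_t modbus_crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;               /* Modbus RTU start value */
    size_t i;
    int bit;

    for (i = 0; i < len; i++) {
        crc ^= data[i];
        for (bit = 0; bit < 8; bit++) {
            if (crc & 0x0001)
                crc = (crc >> 1) ^ 0xA001;   /* reflected CRC-16 polynomial */
            else
                crc >>= 1;
        }
    }
    return crc;
}

The ASCII LRC, by contrast, is just the two's complement of the 8-bit sum of
the message bytes.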

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 





RE: copy data from a front panel table

2004-04-20 Thread Rolf Kalbermatter
Himes, Allen [EMAIL PROTECTED] wrote:

Thanks to the guys that offered suggestions.  I forgot to mention that
the application from where I need to grab data is a .exe.  The methods
suggested only worked (or I couldn't make them work, anyway) for a
regular VI and not an .exe.  What I initially envisioned was something
as simple as selecting the data from the Front Panel with the mouse and
pasting that into a spreadsheet somewhere.

A LabVIEW executable can also have VI server enabled. You just set a few
settings in its ini file to allow VI server access.
If the executable is not a LabVIEW app, then I'm afraid we don't know enough
about your application yet to have even a slight idea of how you could do
that.
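
For the LabVIEW executable case, the keys in the application's ini file look
roughly like this (a sketch only; the exact key names and defaults should be
checked against the LabVIEW documentation, and the section name is that of
your own executable):

[MyApplication]
server.tcp.enabled=True
server.tcp.port=3363
server.tcp.access="+*"
server.vi.access="+*"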

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re (2) : appbuilder

2004-04-20 Thread Rolf Kalbermatter
Infos - Bien Entendu [EMAIL PROTECTED]

this is really unfair from national as VIs can migrate from one platform
to another and embedding VIs in multiples standalone engine (one for
each platform) should be quite easy to do.

Well, it has economic as well as technical reasons. First, integrating all
possible cross-compile targets into each LabVIEW version would be technically
possible but would exponentially increase the effort needed to test each new
LabVIEW version before release.
Second, NI has IMO a point in saying that if you want to do that you need to
have an additional LabVIEW license for the target platform. Is that unfair? I
don't think so; it is a decision they made and all you can do is reason with
them to change it, but that is it.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: Windows Security Permissions

2004-04-20 Thread Rolf Kalbermatter
Jed Davidow [mailto:[EMAIL PROTECTED] wrote:

 Is there a way to set windows permissions thru labview on 
 XP/2000?  I'm not talking about the simple rwx file flags,
 but actual windowsNT security settings.
 
 Has anyone written a library that uses the SDK, or is there
 a command line hack/script out there?

Well, the Windows security API is so involved, with so many complex
data types, that trying to access it directly from within LabVIEW
is a total no-no, unless you have months and months of time.

The API is also so involved that depending on what you want to change
you may spend months to get it done properly even when writing an
intermediate DLL to call from LabVIEW.

Your best bet is to try to find an ActiveX control (shudder) which
gives you the functionality you want to modify and try to call that
from LabVIEW.

 I want to change read/write/execute, possibly owner/creator 
 permisions for different users, and possibly inheritance options
 on files and folders.

This seems feasible to do and can probably be done in a few
days of C hacking. But still, the API for dealing with ACLs and such
is not something you easily call from within LabVIEW.

Another option in this case is probably to use the command line to
call some Windows command line tool to change the settings you want.
Not sure which one that would be though.
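
Just as an illustration of that last route (assuming the standard cacls.exe
that ships with Windows 2000/XP ends up being the right tool), such a call,
e.g. through System Exec, could look like:

cacls C:\Data /T /E /G SomeUser:R

which grants the (hypothetical) account SomeUser read access to C:\Data and
everything below it, editing the existing ACLs rather than replacing them.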

Rolf Kalbermatter





RE: Re (2) : appbuilder

2004-04-20 Thread Rolf Kalbermatter
[EMAIL PROTECTED]; Forum LabView

 i agree that building a cross compiler wouldn't be so easy to 
 do and not a profitable option
 But having to pay for a whole development system is really 
 expensive as
 you already bought it for another platform
 
 why not just selling an application builder as vi are able to migrate
 between platforms (ok with some exceptions, but i won't 
 digress on this point) this should be more ... fair

The problem really is that the application builder is only the linker,
if you compare it to a traditional compiler/linker environment. The
compiler part is built into LabVIEW itself, as the compiled code is necessarily
independent of whether you execute your VI in LabVIEW or as an executable.

Even though NI of course has a PPC code generator for their Macintosh
version of LabVIEW, making sure that that generator works correctly when
cross-compiling for, let's say, the x86 target is a very difficult thing
to do. The least problematic here would probably be between Windows
and Linux as they use basically the same machine code generator.

But crossing the CPU border is a very tough thing to do. Maybe in
LabVIEW 9.0 as a Super Professional Developer System ;-)

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: This is simple???

2004-04-14 Thread Rolf Kalbermatter
Seifert, George [EMAIL PROTECTED] wrote:

I thought I was taking the easy way out by using available drivers with all
the error handling built in. The setup on some GPIB instruments is not trivial
and it's nice to have the examples supplied with the drivers. It still seems
somewhat of a toss up to me as to whether I should write my own drivers or
live with all the installation hassles. I just don't understand why the installer
isn't smart enough to install the run-time version of all the necessary drivers.

Because it is hard for software to be smarter than humans ;-)
Honestly, automatic detection of which components to include is alone almost
an impossible task; deciding how much of each to include or not, in which
order, with what dependencies, for Win9x/NT/2K/XP etc. etc., makes this
basically a harder job to design than the entire LabVIEW application
is to write.
And with each new driver version you are back to the start again
and can try to figure it all out once more. Why do you think the new Microsoft
installer is so difficult to deal with once something didn't work as planned?
Because it is a software package which tries to do all of this, except the
component detection, which you have to tell it manually. One small thing
goes wrong and you are stuck with a partial installation that can neither
be resumed nor removed, since the installer does not know where to start with
either. Then you can try a registry clean, some MSI problem solvers and
more esoteric magic and hope you get back to the point where you can do a
new full install.

And let's assume the installer says: hey, I need
1) LabVIEW runtime
2) several LabVIEW support components

Notice that the application builder already does these first two steps, as
it is from LabVIEW for LabVIEW and therefore can quite easily know what LabVIEW
needs.

3) VISA (VISA 3.0 full runtime 40MB, you only need serial, for instance)
4) IVI (fill in your MB)
5) GPIB (Enet?, PCI?, USB?, 1394?, classic?)
6) NI-DAQ for the little DIO line you used

Jeez, my application is a simple 10 VI application and the installer uses more
than 100 MB, are those guys from NI crazy?

Last but not least, about uninstallation: you update NI-VISA to 3.0.1 to fix a
problem. Uninstalling the application will not be able to uninstall
NI-VISA 3.0.1 as well, as it was not part of the original install!!

Basically, as much as you would like a smarter application builder, it is a
project that any developer who attempts it can only get frustrated with.
There are definitely endless other possibilities where someone has at least
a small chance of creating something that will work in the end.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Ancient Version question

2004-04-14 Thread Rolf Kalbermatter
Smith, Blair [EMAIL PROTECTED] wrote:

We have a LabVIEW program under strict revision control.  Presently, the
code is at the LabVIEW Rev. 5.1 level.  We are about to make changes (minor)
to that code.  Version upgrades are way beyond scope.  My preferred editing
machine is at the 5.1.1f level, due to other-project upgrades.  Has anyone
installed 5.1 on a machine already at the 5.1.1 level, and had success?
Many have reported that 5x and 6x coexist peacefully, but will a 5.1 install
exist happily alongside a 5.1.1 install in the same machine, or will I have
to uninstall the whole 5.1.1 package, then reinstall 5.1?

Well, this can only be a medical or pharmaceutical application ;-).

If you install 5.1 into a separate directory it should most probably
work to have 5.1 and 5.1.1 on the same machine. Even in 7.0 LabVIEW is still
quite independent of a lot of external components, and before 6.0 this was
almost entirely true.
Having worked with both 5.1 and 5.1.1, I would however consider 5.1 a not so
nice experience. Be prepared to save fast and frequently or you will lose
considerable edit work.
But maybe that is better than having to fully audit the entire application
again.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: Consut in spanish

2004-04-14 Thread Rolf Kalbermatter
Daniel wrote:
I am sorry bu this question have to be made in Spanish. If somebody can
read it but cannot answer it in Spanish does not matter, I will take the
answer in whatever language is provided.
I am writing a program in which access to different screens through
different options. The screen change is done through state-machines, closing
the origin VI and opening the VI at which I enter. The problem ins on the
screen changes, there is an intermediate state of flickering and many times
it is visible in the Windows desktop.
 The question is: How it is possible to eliminate such flickering? In
another words, Is it possible that the origin VI stays visible until the
arrival Vi finishes opening itself?

There will always be some sort of flickering, I think. However, to avoid
having no window at all for a moment, you will have to get LabVIEW to do what
you want by ensuring your program behaves accordingly.

I usually let each VI open its front panel itself. The caller then opens the
VI reference, runs the VI and waits in a loop with a reasonable timeout for
the subVI's front panel Open property to become true. Once that has happened,
I continue with closing the caller's front panel.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: CIN code

2004-04-13 Thread Rolf Kalbermatter
Kalyanramu Vemishetty [EMAIL PROTECTED] wrote:

I am starting to learn about CIN codes.

I downloaded a program from NI website which had a CIN.

How can I get the code  written in CIN.

If you know the author of the CIN you might ask him if he
wants to give you the C source code. A CIN is a compiled
binary object file without any direct reference to the
source code.

The only other possibility, disassembling it, is illegal,
at least in the US.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Regional settings date format

2004-04-13 Thread Rolf Kalbermatter
John Brohan [EMAIL PROTECTED] wrote:

I am having a struggle with date formats and LabSQL. One big problem is the
regional preferences date format. If I can read the setting then I can
generalize the LabSQL date treatment.

Does anyone know how to read the regional settings for the date format?

You will have to call the Windows API. (well maybe it's in the registry
actually but I only know about the Windows API. Chances are that even if
it is actually in the registry it may be at different locations for the
different versions of Windows.)

Check out the description on MSDN (msdn.microsoft.com) to see what tokens
you can use.

UINT GetProfileInt(LPCTSTR lpAppName, LPCTSTR lpKeyName, INT nDefault);
DWORD GetProfileString(LPCTSTR lpAppName, LPCTSTR lpKeyName, LPCTSTR lpDefault,
                       LPTSTR lpReturnedString, DWORD nSize);
(the lpAppName section to use here is "intl")

One warning though. The information you get from there is not really much
better than what you can get from smart parsing of the localized LabVIEW
string. 
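
Just for illustration, a minimal C sketch of such a call (the "sShortDate"
token is an assumption here; check the MSDN description for the exact token
names):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    char fmt[80];

    /* Read the short date picture from the [intl] settings,
       e.g. "M/d/yyyy" or "dd-MM-yyyy" depending on the regional settings. */
    GetProfileStringA("intl", "sShortDate", "", fmt, sizeof(fmt));
    printf("short date format: %s\n", fmt);
    return 0;
}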

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: SubVI Connection Description is the Caption?

2004-04-09 Thread Rolf Kalbermatter
[EMAIL PROTECTED] wrote:

When I created a VI recently, I noticed that a control's label in the
context help did not match the control's label.  I had copied the control
from another VI and didn't realize that it had a caption set different than
the label (only the label was visible).  When I wired the VI as a SubVI,
the terminal showed a different description than the label.  After a few
moments of confusion, I realized that it was using the caption.

Shouldn't a control's label be used as the connector label or am I crazy? I
guess there is still new things to learn about LabVIEW even after using it
for 11 years.

It is a little more tricky. When you are in edit mode and hover with the
mouse over a control, you will see both the label and the caption (if they
are set) in the context help window, one of them in []. In run mode however
you will only see the caption, if it is defined at all, and otherwise the
label. I find that a nice feature.

As for the floating tip showing up when you wire, I actually thought that
would be the label. I never noticed what you describe, but I never really
checked, as I'm only concerned about captions for VIs with a visible front panel.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Pointer to Pointer from DLL

2004-04-08 Thread Rolf Kalbermatter
Marco Tedaldi [EMAIL PROTECTED] wrote:

Now there is a Function to read and write the Contents of the EEProm on 
the Device. This DLL-Call wants a struct with some values in it. some 
DWords, some Booleans... no problem. But It also wants 4 pointers to 
strings.

As the only possibility to simulate a struct is creating a array of U8 
in the right size and type-cast it to a cluster or pull the data out 
myselfe, I'd get four pointers. This means I've got 4 32-Bit numbers 
which describe the memory-location where to find the strings. But how do 
I get the string in Labview?

How is life back in Switzerland?

Well, it really depends. If the DLL is allocating them and just returning
them to you, you can read them with a little trick directly in LabVIEW.

Create a Call Library Node. Configure it to use the library LabVIEW,
calling the function MoveBlock with C calling convention and the following
parameters:
return value : void
1) src: uInt32, by value, the 32 bit pointer from the struct
2) dest: C string or uInt8 array, depending on whether the buffer contains
   zero-terminated text or binary data
3) len: int32, by value, the number of bytes to copy

Preallocate a long enough buffer for the string or array on the LabVIEW
diagram BEFORE wiring it to the input of the Call Library Node. This is
best done with Initialize Array to create a uInt8 array, possibly
converting it to a string with the Byte Array to String node.

However, there is the problem of who is going to deallocate those buffers
again. If the caller is supposed to do that without calling a function of
the DLL itself, you will have to find out which API you can use for that.
Ideally the DLL would have a specific function to which you pass the entire
struct so that it deallocates all embedded pointers properly.
More likely though, the function expects you to allocate those buffers
BEFORE passing the struct to the DLL function and to deallocate them afterwards.

You could do that at the LabVIEW diagram level, similar to the MoveBlock
solution above, by calling DSNewPtr() and DSDisposePtr(), two other documented
LabVIEW Code Interface Manager functions (check the Using External Code in
LabVIEW manual in the online documentation for their prototype definitions),
which you can call in the same way through the Call Library Node. (This trick
of calling into the LabVIEW library is not really documented anywhere, but it
has worked since about LabVIEW 5.0 without problems on all official LabVIEW
platforms, which is the reason those functions use the C calling convention,
and it is used in some VIs by NI themselves.)
However, this soon gets almost unmanageable at the LabVIEW level, and if you
have the possibility to create your own wrapper DLL I would certainly go for
that solution.
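
For reference, the prototypes involved, roughly as listed in the Using
External Code in LabVIEW manual (a sketch of the declarations only, not a
complete wrapper):

/* from extcode.h in LabVIEW's cintools directory */
void  MoveBlock(UPtr ps, UPtr pd, int32 size);  /* copy size bytes from ps to pd */
UPtr  DSNewPtr(int32 size);                     /* allocate a pointer based block */
MgErr DSDisposePtr(UPtr p);                     /* release it again               */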

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Details on use of Device Control/Status.vi

2004-04-08 Thread Rolf Kalbermatter
Brooke Clarke [EMAIL PROTECTED] wrote:

I'm writing an application where I need to be able to bit bang serial 
port output lines and check the status of serial port input lines in LV 5.1.

In the vi.lib\instr\_sersup.llb folder there is the serial line 
control.vi and it uses two sub VIs to allow controlling the DTR and RTS 
lines by using the open serial driver.vi which can be opened and 
figured out plus the Device Control/Status.vi which is a yellow vi and 
can not be opened.

I have the feeling that with the proper inputs to Device 
Control/Status.vi it would be possible to read the status of the serial 
port input lines CTS and DCD.

Does anyone have details on using the Device Control/Status.vi?

You must be using a LabVIEW version before 7.0. The new 7.0 serial VIs use
VISA to control the serial port, and VISA incidentally allows you to control
all those handshake lines through properties of a VISA resource.

The Device Control/Status node is not a VI but a built-in LabVIEW node
which is basically obsolete in LabVIEW 7.0. Together with the Device Open,
Device Read and Device Write nodes it was a direct interface to the Macintosh
Device Manager. You could theoretically control any device driver
under Mac OS with these nodes directly, although doing so could become a very
tedious exercise, as the interface was rather low level compared to the rest
of the LabVIEW functionality.
Under Windows NI created a Macintosh device driver emulation layer for the
serial port, which was located in the ominous serpdrv file. No other device
emulation files were ever released. The I/O control codes into the serpdrv
file are not public and I actually doubt there are any others besides
the ones used in the _sersup.llb file, although this driver evolved
over time, from no real additional control options in LabVIEW 3 up to
allowing some handshake control in LabVIEW 5.

Don't bother with this interface at all. It's not worth the hassle, it will
most probably go away in LabVIEW 8 or 9, and VISA offers you more complete
control of the serial port in a much more platform independent way.
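
For what it's worth, the serial line properties the 7.0 VIs expose map onto
VISA attributes which are just as reachable from C; a rough sketch (error
handling omitted, the resource name assumed to be the first serial port):

#include <visa.h>

int main(void)
{
    ViSession rm, port;
    ViInt16 cts, dcd;

    viOpenDefaultRM(&rm);
    viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &port);
    viGetAttribute(port, VI_ATTR_ASRL_CTS_STATE, &cts);  /* VI_STATE_ASSERTED or not */
    viGetAttribute(port, VI_ATTR_ASRL_DCD_STATE, &dcd);
    viClose(port);
    viClose(rm);
    return 0;
}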

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: OpenG Coding Challenge - Remove Backspace

2004-04-08 Thread Rolf Kalbermatter
Craig Graham [EMAIL PROTECTED] wrote:

Apparently the tests are being done reentrant, but not with the VIs set to
subroutine. Both changes give the same doubling in performance- I assume
from ditching debug and UI code internally.

Does a similar speedup from ditching debug code occur naturally in a built
app for VIs that are not reentrant? I've never noticed, but I've never
bothered looking at the effects of things like this before.

No, removing debugging alone will result in a gain of a few % in execution
speed at most.

What speeds things up is that, for time critical or subroutine priority VIs,
LabVIEW not only disables front panel updates (done anyway if the panel is
not open), but even the code necessary to buffer the data for front panel
display. A bitch to debug such a VI (you really can't ;-), but a nice solution
for a small atomic function which has been thoroughly debugged beforehand.
Don't do it for long executing functions though. LabVIEW's priority scheme
is very strict, and as long as a high priority clump (subVI) is executing
in an execution system, little else will happen in that execution system,
although in LabVIEW 7 this is alleviated a little through the default
allocation of multiple OS threads per execution system.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Video capture cards, LV and Linux

2004-04-08 Thread Rolf Kalbermatter
Steven Hammer [EMAIL PROTECTED] wrote:

Does anyone out there have any experience with a video capture card that 
can be used with LabVIEW (with IMAQ) and Linux?

Any recommendations based on experience with any good cards?

The main hassle (I'm sure - I'm a relative Linux newbie) will be setting 
up the card to use Linux. However, using it with LV will no doubt have 
its own troubles. Any tips on what to look out for with LV?

I think you will be disappointed to learn that IMAQ is not supported under
Linux. It currently runs under Windows OSes only.

You still can read in pictures and display them in the Picture Control but
IMAQ functionality will not be available under Linux or any other non-Windows
LabVIEW version.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: OpenG Coding Challenge - Remove Backspace

2004-04-08 Thread Rolf Kalbermatter
Craig Graham [mailto:[EMAIL PROTECTED] wrote:

 Rolf Kalbermatter wrote:
  What speeds up things is, that for time critical or subroutine
  priority VIs, LabVIEW not only disables front panel updates (done
  anyway if the panel is
  not open), but even the code necessary to buffer the data for front
  panel display. A bitch to debug such a VI, (you really can't ;-) but
 
 This can't be all there is to it though. Execution times (ms) for
 consecutive tests of a sub-VI;
 
 29.6, 29.3Normal priority, not reentrant, panel open.
 25.9, 26.4Normal priority, not reentrant, panel closed.
 9.6, 9.6Normal priority, reentrant, panel closed.
 
 Going from non-reentrant to reentrant more than doubles the 
 performance. Simply closing the panel to disable GUI code gives a 
 comparitively slight improvement. Note that all VIs were saved to
 disk before running the tests.

An important remark, as otherwise your timings would be rather off.

 Is the data copied to the front panel even if the front panel 
 is not open, and the update code to copy from the buffers to the
 screen is the only thing disabled when a VI is closed? It does
 not seem likely, since if you open a subVI after it has executed,
 the front panel does not contain the data loaded in to it during
 execution.

Well, that is indeed an indication that it is not copying data for closed
front panels, but I'm not entirely sure. Removing debugging alone, however,
results in an even smaller performance increase, so it can't be about
debugging at all. It may then have to do with the data space preparation.
For non-reentrant VIs the data space is shared among all instances
and probably needs to be initialized somehow each time the VI is called.
Reentrant VIs each maintain their own data space and need no additional
initialization before each call. Just a guess really!

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: Video capture cards, LV and Linux

2004-04-08 Thread Rolf Kalbermatter
Steven Hammer [mailto:[EMAIL PROTECTED] wrote:

 Curses. No IMAQ in LV for Linux.
 
 That aside, I can still think of a use for a video capture card that 
 works in LV/IMAQ under Windows AND is useable with Linux (without 
 LabVIEW or IMAQ). Any recommendations for a card that fits both 
 requirements?

I have no experience with image acquisition hardware under Linux, but you could
try the following links:

http://www.arvoo.com
http://www.euresys.com

They certainly should be usable from within LabVIEW using the Call Library Node to 
call directly into the driver API for the cards.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Details on use of Device Control/Status.vi

2004-04-08 Thread Rolf Kalbermatter
Brooke Clarke [EMAIL PROTECTED] wrote:

I have a very simple sequence with frame 1 having a single VISA Property 
Node with two elements that sets RTS and DTR.

Frame 2 has a 3 second wait.

Frame 3 has another VISA Property Node with 4 elements to read the 
status of CTS, DCD, RTS and DTR.

When I run this I can see DTR change for 3 seconds then change back to 
where it started.

When the VI stops, the VISA resource is deallocated and closed, and that
will probably reset the serial port to its inactive state. Just add a 10
second delay in a fourth frame to see if that is the reason.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: OpenG Coding Challenge - Remove Backspace

2004-04-08 Thread Rolf Kalbermatter
Alex Le Dain [EMAIL PROTECTED] wrote:

Just be careful. The penalty of reentrancy is memory. Marking all your VI's reentrant
might give some increased processing speed but as each instance of the VI has it's own
memory space, so depending on what you're doing making all your VI's reentrant can
cripple you with memory requirements (especially with low RAM systems like FP 
controllers).

I would also think that re-entrancy here is only for testing as near to optimum,
presumably building a VI into an EXE would achive the same increase in performance as
is achieved with reentrant=TRUE, as the UI and debug code are removed then. Is that
correct Jim?

No, I don't think so. Unless NI made some significant changes in LabVIEW 7.0, building
an application gives just about the same performance gain as disabling the debugging flag.
I only have older figures, but in LabVIEW 5 and 6 this resulted in gains of a few % and is
therefore bound to get almost lost in the noise of the measurement accuracy.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Save with 24 bits

2004-03-30 Thread Rolf Kalbermatter
Infos - Bien Entendu [EMAIL PROTECTED]

I am searching for a way to save 24 bits values on an AIFF file
i thought that keeping the 3 first bytes of an I32 should make it 
but it didn't work
could anyone give me some clue ?

Since LabVIEW always streams data in big endian format (flatten
to string, typecast or stream to disk) you will probably have to
keep the last 3 bytes instead of the first 3.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Save with 24 bits

2004-03-30 Thread Rolf Kalbermatter
Infos - Bien Entendu [EMAIL PROTECTED] wrote:

i tried to split the bytes with the split number function (and just
taking the 3 first one) and it worked !
i don't know why i couldn't make it with concatenating bytes after a text
conversion which should be equivalent ???

No, it isn't! LabVIEW's flattened stream format (typecast is also a flatten
operation) is ALWAYS big endian (aka most significant byte first). So just take
the last three bytes instead of the first three of a four byte flattened I32
and make sure you append the bytes in the right order.

Just because Intel x86 uses little endian does not mean that is the right
thing to do. LabVIEW as a multiplatform system had to standardize on
something, and they chose the standard of almost all other CPUs, including
the one LabVIEW originated on, which is also the preferred network byte
ordering for binary internet protocols.
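
In C terms, what "take the last three bytes of the flattened I32" amounts to
is simply this (a sketch, assuming the 24 bit sample value sits in the lower
24 bits of the I32):

#include <stdint.h>

/* Write one 24 bit AIFF sample, most significant byte first. */
static void put_sample24(uint8_t out[3], int32_t sample)
{
    out[0] = (uint8_t)(sample >> 16);  /* most significant byte  */
    out[1] = (uint8_t)(sample >> 8);
    out[2] = (uint8_t)(sample);        /* least significant byte */
}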

By the way, I believe Flatten Pixmap.vi, a subVI of Draw Unflattened Pixmap.vi
in LabVIEW 7, does more or less just what you need.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: join synchronization methods

2004-03-25 Thread Rolf Kalbermatter
SPRINTER [EMAIL PROTECTED] wrote:

May be someone knows the way to join different synchronization methods ?

In other words I would like to use the queue and occurence with one while
loop. Desired effect is that while loop waits for one of these events
(element in queue or occurence).
In my case, dequeue function was being blocked by the wait occurence
function.

Event controlled objects in a dataflow language will each have to return before
the code downstream can continue. Not much to change about that.

You have two options.

1) Separate the two into different loops. Let them do whatever they need to
do. If necessary, communicate the event from one loop (occurrence) to the other
loop (queue) if the code to perform for the occurrence is the same as for one
of your queue events.

2) Even better: instead of using an occurrence, directly generate the corresponding
queue command element in the place where you would normally trigger the occurrence.
Or add a specific dummy occur queue command to send to the queue from wherever
you want to trigger the occurrence. Queues really are just very sophisticated
occurrences, with queueing of the events as well as data storage for each event.
So there is no problem at all in using a queue as an occurrence here, other than
the queue being a little more expensive in resource terms; but as it is there
already anyhow you really lose nothing.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Waveform graph legend

2004-03-25 Thread Rolf Kalbermatter
[EMAIL PROTECTED] wrote:

I have waveform graph of 48 plots.  The first 16 plots are the actual data.
The last 32 plots are the upper and lower limits for the first 16 plots.

I have the legend visible and only expanded to for the actual data (first
16 plots).  So there are only 16 plots in the legend.  If the color one of
the plots in the legend changes the color of the limits needs to change
accordingly.  I do this by using a property node (set active plot, plot color).
But since legend is not expanded to include the limits an error 1077 Invalid
property type is given.  If all 48 plots are included in the legend I do not
get the error.  However I would prefer to not display all 48 plots in the
legend.

Has anyone else seen this?  What are some ways to work around this?

The legend or the plot data defines which plot channels are available. So if you
want to be able to change plot attributes, you need to either make sure there
is corresponding data for that plot or that the legend is expanded to show that channel.

It is as simple as writing a single NaN value for each channel you want to be
able to change plot attributes for.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: [W] Data link Serial port

2004-03-24 Thread Rolf Kalbermatter
R. Glenn Givens [EMAIL PROTECTED] wrote:

I am writing a data link to communicate between a PC running LV 6.0.2 and an
industrial computer. The link on RS232C will run @ 9600 baud and the
throughput is such that I'm using 1/4 to 1/3 of capacity. The industrial
computer sends the data and has a 500 byte buffer each for send and receive.
But the data is all one way from the industrial computer to the PC. The PC
will save this data to disk continuously and also present displays etc.

It sounds like there are many ways to operate the serial port in LV (maybe
VISA, maybe direct port manipulation, more likely old-fashioned serial port
VI's?. I haven't done much checking but I wondered what the consensus was
for the best method for this type of application. Since PC's only have a
buffer of a few bytes, I would prefer to have something automatically handle
the interrupts and have a big input buffer that I just read from time to
time (say every 50 to 100 msec). I used to do that interrupt handler stuff
in assembler in the DOS days.

VISA does all the buffering for you. You just open a session, and at 9600
baud you shouldn't even need to configure special buffer sizes. I regularly use
VISA at up to 48 kBaud without any problems and have also
managed 115 kBaud on VISA with quite some ease. There is really no need to
do direct register programming and interrupt handling at the application level
anymore for standard serial port communication. VISA and similar interfaces
neatly deal with this for you and you absolutely do not need to worry about
it. It may be hard to get used to this if you have previously dealt with
these things yourself, but believe me, it is a big relief.

Serial port problems are seldom at the VISA (or whatever API you use)
level but most often at the actual instrument-application communication
level, due to funny (not to say stupid) protocol conventions and, most often,
bad documentation.
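
In VISA C API terms the whole receive side is roughly this much (a sketch,
error handling omitted; polling it every 50 to 100 ms as you planned is fine,
VISA buffers whatever arrived in between):

#include <visa.h>
#include <stdio.h>

int main(void)
{
    ViSession rm, port;
    ViUInt32  got;
    char      buf[512];

    viOpenDefaultRM(&rm);
    viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &port);
    viSetAttribute(port, VI_ATTR_ASRL_BAUD, 9600);
    viSetAttribute(port, VI_ATTR_ASRL_DATA_BITS, 8);
    viSetAttribute(port, VI_ATTR_TMO_VALUE, 100);     /* 100 ms read timeout */

    viRead(port, (ViBuf)buf, sizeof(buf), &got);       /* got = bytes actually read */
    printf("read %lu bytes\n", (unsigned long)got);

    viClose(port);
    viClose(rm);
    return 0;
}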

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: [W] Sentinel SupePro Automatic Protection

2004-03-23 Thread Rolf Kalbermatter
Junginger, Johann [EMAIL PROTECTED]

I had problems like that once using a similar hardware security key. The
problem as I understand it is that a LabVIEW executable is not a 'true'
executable (people who know better are invited to clarify). My solution was
to use calls to the key from within my (LabVIEW) code to check for the
presence of the key. 

Huuu? It certainly is an executable in the sense of the underlying OS. It has
a slightly special layout, but wrapping software that is not able to handle that is in
my opinion flawed.

A LabVIEW executable consists of a stub executable and the VIs compressed into
an LLB (LabVIEW VI Library) tacked to the end of this.

The stub executable is a real executable impersonating the process and its
context and then calling into the LabVIEW runtime DLL to actually do the real
work, such as loading the compiled VI resources from the attached LLB,
executing them, drawing on the screen and all that.

What most probably happens here is that the Sentinel wrapper wraps the LabVIEW
executable just fine, but when extracting it only extracts the actual stub
executable, based on the size recorded in the executable header. The LabVIEW
stub being invoked then can't find the attached LLB and aborts with the
mentioned message.

I'm not sure Sentinel could solve this easily, as extracting the entire executable
properly may not work in memory only; it would then have to actually write a
temporary file to disk, which would allow circumventing the protection this wrapper
is supposed to give.

One way NI could solve that would be to add the LLB as a binary resource to the
executable itself and load it from there, but that may have quite some speed
implications.

Of course, implementing the calls to the Sentinel directly in LabVIEW is the
better way anyhow, and at least for the Sentinel Pro there seems to be a VI
library which allows you to do just that. If calling into a DLL is not your thing,
you can also call their command line tool to query the key.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]




Re: exception error only w/ an .exe built from LabVIEW

2004-03-23 Thread Rolf Kalbermatter
Himes, Allen [EMAIL PROTECTED] wrote:

LabVIEW:  An exception occurred within the external code called by
a Call Library Node.  This might have corrupted LabVIEW's memory.  Save
any work to a  new location and restart LabVIEW.

VI Jitter Test with Temp.vi was stopped at node 0x410 of subVI
tktds7k Set Measurement Source.vi.

Well, it seems like the Tektronix TDS 7000 driver has some problem here which
might manifest itself only in the runtime system. Looking at the IVI C
source of that driver might help you, if you understand C.
 
I just realized that my development machine is running Windoze2k and the
target machine is running WinXP.  Could that have something to do with
it?  When I ran the .exe file on my development machine, it ran through
the whole test without any problems.  I left it over the weekend and
when I came in, it had finished without generating this error.  What
could be different between the two computers?

Unlikely, but with so much software involved (DLLs, drivers, the LabVIEW executable,
VISA, etc.) you never know. At least try to make sure you install the
latest VISA runtime from the NI site.

Also, sometimes, the .exe file won't even start on the target machine
because there are communication errors with my scope and other system
components.  They vary between these messages:

*  tktds7k Initialize.viERRDriver Status:  (Hex 0xBFFF0039) Unable to
queue the asynchronous operation because there is already an operation
in progress.
*  tktds7k Initialize.viERRDriver Status:  (Hex 0xBFFF0078) The value
of some parameter (which parameter is not known) is invalid.

But improper resource closing in a previous run might be the reason for this. If
the previous run aborted with an exception, that is probably it.
VISA sometimes maintains asynchronous state information across process
restarts if the VISA DLL wasn't properly unloaded.

And finally, whenever the .exe application is closed, there is an error
that an instruction at location 0x7343FF3F could not read the memory
at the same location.  This particular error happens on either the
target machine or the development machine.

Huu. That sounds strange, but judging from the address it is unlikely to really
be in executable code; it is rather a data pointer which managed
to get mixed up into the execution chain. Either a stack corruption or an improperly
set cleanup procedure callback pointer. Of course, one of the DLLs you are
using, including your USB interface, may have scribbled over some address
space it wasn't supposed to touch. And just because you don't get such
an error when calling your DLL from an application other than LabVIEW
really means nothing at all.
 
Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: Alternative info-LabVIEW list

2004-03-23 Thread Rolf Kalbermatter
Kevin Valentine [EMAIL PROTECTED] wrote:

I'm not a big fan of mailing lists for a number of reasons.   
Searching their contents is never a trivial task.  Threaded  
topics are easily broken.  They get mail bombed, spammed,  
etc.

I, on the other hand, prefer mailing lists. I don't want to
log in to a news server every day to follow all the communication
and to wade through a long list of unrelated posts.

Just the way a mailing list works accommodates my needs.
Info-LabVIEW has never really been spammed in the past thanks
to its outstanding maintainer Tom, a strict subscription policy,
and all those fine folks being part of this list. I hope this
continues in the future.

I would like to see Info-LabVIEW on a news (NNTP) server.   
The beauty of comp.lang.labview is that I can read and post  
from a news reader and search its archives with Google.  No  
need for a web interface.  I know, news groups can also get  
spammed and flame wars are more common, I just like them  
better.

If you want NNTP you can go to comp.lang.labview. Otherwise
Info-LabVIEW is the place to be, and I prefer the latter.
Why set up yet another news service for LabVIEW, which will
either have to be seriously moderated or get drowned in all
sorts of spam and unrelated posts, including sex pill ads
and heavy flame wars? Moderating a news service, however,
seems almost like a full time job to me, so chances are
that it would soon fade away or get opened up to unmoderated
operation and all its negative side effects.

I was looking into how to set one of these up for LabVIEW  
discussions.  A group like alt.comp.lang.labview ... or  
something like that.

Again, why this? Do you want to just mirror the discussion
on Info-LabVIEW there? Or compete with comp.lang.labview?
The first might have some use, but the second is surely doomed,
as a lot of professional people can't afford to
monitor both comp.lang.labview AND Info-LabVIEW, not to
mention yet another news service.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: Hiding Main VI's Window in built application

2004-03-09 Thread Rolf Kalbermatter
Yonatan Tidhar [EMAIL PROTECTED] wrote:

Yes I thought so, but way should the execution should be depend
Were the front panel is open or not? No meter if I leave open reference to the VI
I was expect that the vi execution will continue until the code ends.

Simply because it is way too easy to close the top level front panel (the x in
the top right corner of the window, for instance), and if an open reference
forgotten by the programmer would keep the VI alive, you would end up
with zombie processes. Basically it is protection for the (not so innocent)
average LabVIEW programmer, who does not want to deal with proper resource
deallocation once he is done with what he wanted to do.

You can work around this by keeping a top level VI which starts up all the
rest and gets neatly hidden by moving it off screen.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: API Serial Functions

2004-03-09 Thread Rolf Kalbermatter
C. Allen Weekley [EMAIL PROTECTED] wrote:

Has anyone written VI's to perform serial I/O using functions in
kernel32.dll?

Why would you want to do that? It's a pain in the ass, will only
work under Windows, and will likely be a lot of work before you can
even start communicating with your device. On the other hand, I have
been communicating with all sorts of devices through VISA for years
and it has always worked for me, with the additional bonus that my
LabVIEW applications work under Linux just the same, without rewriting
or even conditional VI code.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: API Serial Functions

2004-03-09 Thread Rolf Kalbermatter
C. Allen Weekley [EMAIL PROTECTED] wrote:

Yes.  My company manufactures a variety of instruments that communicate
solely with serial.  We have been using NI-VISA and are happy with it,
but NI-VISA requires an installation that is often larger than the
application we supply to our customers, and there are licensing
restrictions for distribution. I was hoping to develop our own serial
functions in a much smaller package than NI-VISA.  This is why I am
interested in using the Windows API.  NI-VISA with its support for so
many kinds of instruments seems like overkill for our application. When
the customer installs the NI-VISA driver they end up installing features
to support GPIB etc. that are not needed for our application.

Well, I see. Just forget about my previous mail then, will you. I have
started some work on a VISA32.DLL replacement which I intend to release
as an Open-G VISA library. It is far from complete and I have only worked
on Windows so far, but I intend to actually push the Linux version more. It is
structured similarly to NI-VISA, with VISA32.DLL as the top level API and low
level plugin DLLs which provide support for particular VISA resource
types.

The first one I have started with is serial, of course, but as I said it is
not functional at all yet. I still need some common infrastructure
libraries as well.

I was intending to get something working before putting it out, but it
is going slowly and it will take quite some time before I get there. If
you are interested in working on it as well, I could create an
Open-G subproject and put up what I have so far, so that you could take
a look at it and work on it yourself.
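
For the curious, the kernel32 calls such a serial layer ends up wrapping look
roughly like this (a minimal sketch, Windows only, error handling omitted):

#include <windows.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    DCB dcb = {0};
    COMMTIMEOUTS to = {0};
    DWORD got;
    char buf[256];

    dcb.DCBlength = sizeof(dcb);
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_9600;
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    to.ReadTotalTimeoutConstant = 100;   /* give up after at most 100 ms */
    SetCommTimeouts(h, &to);

    ReadFile(h, buf, sizeof(buf), &got, NULL);
    CloseHandle(h);
    return 0;
}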

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]




Re: Language Independence on Serial Output of .

2004-03-09 Thread Rolf Kalbermatter
Holland, Guy [EMAIL PROTECTED] wrote:

I have an application that outputs a string with numbers via the serial
port.  Ex. SG1.23,4.56,7.89.  I developed the application in English
but when the application was installed onto a Non-English say Holland
language, the output string turns into SG1,23,4,56,7,89.  The periods
are replaced with commas.  I believe it might happen with all languages
that use commas ($1,00) rather than periods ($1.00).  How can I get around
this problem so that my application can be language independent?

You have a couple of options:

1) Change the International Settings in your Control Panel (will affect
   all applications on that computer)
2) Change the LabVIEW global settings (disable Options->Misc->Use
   localized decimal point). This will affect all string formatting
   functions in your application. You can also add the according setting
   useLocalizedDecimalPoint=False to your application INI file.
3) Define with each formatting command what format to use by prepending
   the following:
   %.; use decimal point
   %,; use decimal comma
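
For example, a format string of

%.;SG%.2f,%.2f,%.2f

(the SG prefix and field widths are just illustrative, the %.; at the start is
the relevant part) will always produce decimal points, whatever the locale of
the machine it runs on.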
 

This is great information.  I think I will have to use both yours
and Richard Mahoney's advice.  Your suggestion will work for the
mean time but I think the solution is to use Richard's format
specifier %.; suggestion. Actually, I think the ultimate solution
would be for NI to transparently do this formatting so I don't have
to. If I send a . why would I want it to be anything else other
than a .? 

Not sure what you mean. Mind reading software hasn't been developed yet ;-)

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]




CDR Writing with LabVIEW

2004-02-28 Thread Rolf Kalbermatter
Holland, Guy [EMAIL PROTECTED] wrote:

I don't remember if this topic was covered before but I am interested in writing data 
to a CD
programmatically from within LabVIEW.  Has anyone out there tried this?  Does it 
matter which
type of CDR it is or is there a common interface for all CDR's?I would like to 
check if
there is a blank CD in the drive, if so, write the data to CD and then open the drive 
and send
a message that tells the user it is complete.  I'm using LV7.0 on Win2k.

CIT Engineering has developed a LabVIEW library, NeroVIEW, for Windows to control the
Ahead Nero Burning ROM software directly from a LabVIEW program. It was developed
for Nero 5.5 but should work just as well with Nero 6.
http://www.citeng.com/pagesEN/products/cdrec.aspx

Alternatively, Nero also supports a command line tool which you could call
with the System Exec function.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Confounded and Ranting

2004-02-28 Thread Rolf Kalbermatter
George Gatling (Contractor) [EMAIL PROTECTED] wrote:

I am happily working my way through my current project when I suddenly 
realize about half of my application is in C:\Documents and Settings\...\Temp\

I have no idea how long I have been editing files in a garbage temp folder, 
not to mention which ones are linked to the temp files vs the real files. 
GRRR!!  Has anyone seen this behavior before?  How does an entire app get 
copied to temp AND then why does LabVIEW link half to temp and half to real?

What has happened here is that your target drive was full at some point.
LabVIEW saves its files into the temp directory before moving them to the
real target, to prevent corruption of at least the original file due to memory,
system or disk space problems.

If the target drive is full, LabVIEW aborts the save operation, but the files
saved to the temp directory stay, and if you later save the application again,
the temp copies can become the actually linked ones.

This is a problem in itself, but also underscores the absolute need for 
some sort of project environment.  I have been working with LabVIEW for a 
long time now and I still find myself baffled by labviews sneaky way of 
linking things together.  I realize that it is not easy, every function 
also being its own file, but come on!  Surely it would have been time well 
spent to clean up the linking situation rather than create a bunch of code 
cryptifing express vis!!

A project environment does not remedy the linking issue at hand at all. The
issue really is that LabVIEW uses dynamic linking, which has a lot of advantages.
The disadvantage is that if you are not careful you get into a version mess,
better known under Windows as DLL hell. The solution MS finally came up with
to resolve that issue is to actually have each application install and use
its own DLLs, as the application root directory is searched first at (almost)
all times (no rule without exception, of course). The underlying Windows loader
however will still pick up any DLL with the same name it can find, if the
application directory does not contain the desired DLL.

LabVIEW's approach is not that different and in fact quite well documented. It
stores the path to the subVI in the caller, a relative path if it is on the same
volume and an absolute path if the volumes are different. When loading a VI,
all its callees are first searched for at that stored location, and if not found
LabVIEW starts to work through the search paths configured in
Tools->Options. If still not found you are given the chance to locate it yourself,
and if you are not careful about which directory you start in in the File Dialog
you may actually point LabVIEW to the wrong version yourself. All complicated
and error prone? Yes, but any other approach seems even more
impractical to me.

The thing I always do, often in between for bigger projects but always before
releasing, is to take the top level VI which contains all the main and dynamic
VIs of the project, go to Save with Options, choose Save entire hierarchy,
New location - single prompt - preserve hierarchy, don't save vi.lib, and save
it into a new and empty folder. You immediately see whether you reference VIs
outside of the expected project directory, and you also get rid of all cruft
VIs no longer used in your project.

Remedying any problems then basically means making the undesired VIs unavailable
to LabVIEW by temporarily moving or renaming their top level hierarchy directory
and reloading the project again through the top level VI.
Now LabVIEW will prompt you for the files not found, so just point it to their
location, or better yet adjust the search path in Tools->Options to let LabVIEW
find the right files.

So often I have had two very similar projects and I just wanted to copy the
whole code pile and start from there.  But nearly EVERY time I do that I 
wind up accidentally editing one or two files in the old app because not 
all of the copied files linked correctly. 

Either your project is not located in one single file hierarchy, so that
you have to copy two different top level directories, and LabVIEW, remembering
the location of the original files, will link the files from the hierarchy
other than the main VI's directory back to the original ones again. Or you
happen to have those VIs in the clipboard, so that LabVIEW picks them up
while still in memory (together with their disk location, as it needs to know
where to save the VIs back to disk on change).

Of course I should have expected all of this because it is 9pm on a Friday 
and nearly all software bugs (aka features) seem to be able to sense this.

Hmm, interesting theory, and as applications get more and more complex some
sort of intelligence could maybe develop ;-) But my experience is more that
I'm more likely to do those stupid things at such a moment than early in the
morning when I'm still fresh.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288

Re: More than one executable

2004-02-28 Thread Rolf Kalbermatter
Allen Weekley [EMAIL PROTECTED]

Can a VI built into an executable call a VI that is located in another
executable or a LLB?  If so are there any special tricks in order to
give the calling VI the correct path?
 
Yes, BUT!

The first possibility is to just open a VI reference to the VI on disk. For this
the path to the external VI would be basically appdir\appname.exe\myVi.vi

But as executables only contain the compiled code of the VI, the two applications
need to be LabVIEW version compatible. A LabVIEW 7.0 application can't load a
LabVIEW 6.1 application VI and vice versa.

Also linking may be an issue. A LabVIEW application always searches for VIs in
its own internal VI library first before even attempting to look at other
places, even if you furnish it a full path. So assuming of course the external
VI you want to call does not exist in its own VI library (only the VI name is
used for this check), you still could get into trouble if that external VI was
written to use a different version of a subVI which also exists by that name
in your first application's VI library. The application loader will load the
external VI and, seeing that it calls a VI whose name exists in its local library,
load the local one, which might have functional or connector pane incompatibilities.

With functional incompatibilities you get wrong results and with connector pane
incompatibilities you get a broken VI and the Call By Reference function will
return an according error.

The second approach is to enable VI server in the other application. Then opening
an application reference to that application (you have to start it first somehow,
as the server is TCP/IP based which doesn't provide instance activation) you can
now work on VIs from a remote LabVIEW application just as if they would be local.

A few properties and methods are not allowed in remote operation for security reasons
and the VI server protocol is sort of version dependent. You can connect between
LabVIEW 5.0 and up to 6.1 applications as server and client mixed, but LabVIEW 7
servers can only be contacted by LabVIEW 7 clients. The opposite should work according
to the release notes but I haven't tried that.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: [W] Getting IP address of a local network interface

2004-02-27 Thread Rolf Kalbermatter
Simon Whitaker [EMAIL PROTECTED] wrote:

Does anyone know of a way to get a list of the IP addresses associated
with network interfaces on a Windows PC running LabVIEW? I could call
IPCONFIG with a System Exec node and parse the output but that's a dirty
hack, so I'd rather not.

That's a very frequently used approach under Unix and in fact works very
well there. The Windows way is to call WinAPI functions (from LabVIEW via
the Call Library Node), but it is not inherently better than what you have
in mind.

As long as you only want the default IP address of your computer you can
use the StringToIP node in LabVIEW leaving the input string unconnected. 
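If you do want the full list of addresses, the WinAPI route essentially boils down
to the Winsock host name lookup. A minimal C sketch of what such a wrapper DLL or
Call Library based approach would do (untested against your setup; link with
ws2_32.lib):

    /* List the IP addresses registered for the local host name (Winsock 2). */
    #include <winsock2.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        WSADATA wsa;
        char name[256];
        int i;

        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
            return 1;

        if (gethostname(name, sizeof(name)) == 0) {
            struct hostent *he = gethostbyname(name);
            if (he != NULL) {
                for (i = 0; he->h_addr_list[i] != NULL; i++) {
                    struct in_addr addr;
                    memcpy(&addr, he->h_addr_list[i], sizeof(addr));
                    printf("%s\n", inet_ntoa(addr));
                }
            }
        }
        WSACleanup();
        return 0;
    }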

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H  fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Windows XP Time Set Feature

2004-02-23 Thread Rolf Kalbermatter
Lewis Drake [EMAIL PROTECTED] wrote:

The problem was caused by a new XP feature that periodically updates the
system time clock automatically at scheduled times.  If you open the Windows
XP Time and Date Properties (right click on the computer's time display) you
will see a new tab there called Internet Time.  You may adjust the time
update schedule or disable updating there.  I'm posting this to alert you to
a potential problem if you are writing any LabVIEW program that
assumes monotonic timestamps.

The Logos Time Service (part of Lookout and LabVIEW DSC) does a more intelligent
time adjustment by gradually adjusting the time of your system clock to prevent
non-monotonic jumps in your local time history.
Most modern and real Internet Time Service clients do that too or have at least
an option to enable that.

You don't need XP for that though. Earlier Windows versions can have that option
too, only you will probably need to use the command shell and the net time /setsntp
command to enable and configure the service. 
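For instance (the server name here is only a placeholder for whatever SNTP server
you actually want to use):

    net time /setsntp:ntp.example.org
    net time /querysntp

The first line sets the SNTP server the time service synchronizes with, the second
shows the currently configured one.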

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Reading file names from a directory

2004-02-19 Thread Rolf Kalbermatter
Lewandowski, James R. [EMAIL PROTECTED] wrote:

I'm trying to read all file names from a directory into an array and then choose the
ones to process.  The List Directory vi implies that it will do this but
I'm not having much luck.  

The vi will produce a list of my available drives if I do not give it a path name. 

Well an empty path is considered to be the root of your file system which under
Windows is the parent of all volumes.

Passing the actual path of the directory in which you want to find the files
or other directories will help a lot to get the information you want.

Investigating the error cluster of the function may also help to uncover things
like a bad path name. A simple typo will already make the function fail as it
can't find the directory you told it to enumerate.

Also make sure you actually have read access to the directory you want to list.
Windows security can cause some pretty strange errors sometimes if you are not
aware of it.

Last but not least make sure you don't specify a pattern input if you are not
interested in a specific file ending or such.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: More serial comms [W]

2004-02-18 Thread Rolf Kalbermatter
Steven Hammer [EMAIL PROTECTED] wrote:

Thinking of Craig's earlier serial problem, I thought I might share a 
problem I had with using LV with a VISA serial communications setup on 
Win2K and XP.

If you set the serial port settings using VISA Configure Serial 
Port.vi to whatever baud rate, parity, flow control etc. that you want, 
sometimes you just get nonsense sent to whatever device you've got attached.

I found that you have to set the serial port settings in Control Panel 
- System - Device manager to match your VISA settings. Otherwise 
chaos ensues...

Do you use the VISA Clear or a similar function by any chance? I think this
does reset the entire port under VISA for serial ports (actually the real
cause of a problem on this list lately) which will put it back into the
default (i.e. OS) settings.

Other than that I have never encountered setting those serial port
properties not working the way they should as long as the port I was
trying to configure was actually present in the system ;-)

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: other persons software (was: why I hate express vis.....)

2004-02-14 Thread Rolf Kalbermatter
Scott Hannahs [EMAIL PROTECTED] wrote:

Opening diagram extends over about 7 screens horizontally and 6 screens vertically
(1600X1200 pixel screens).  Ok, bad enough, now open the hierarchy window and I
see a main VI and about 200 express nodes, that I can't open without converting
to sub-VIs.  There were 4 actual real non-system subVIs in the whole thing with
main VI using about 12 Mb of disk space.

Unfortunately the programmer had hardwired the path separation character so it
failed on all platforms except the development one.  The character was in 26
different places and each place was wired into somewhere between 1 and a dozen
string building functions.

Sounds almost like another system I had the joy to upgrade to the newest
LabVIEW version lately. The original programmer was a seasoned C programmer
with little to no knowledge of LabVIEW. Lots of globals and locals all
around, four cascaded state machines all in the main VI, each implemented as
a case structure with around 100 states, sequentialized in a sequence
structure even though only one state machine could be active at the same
time as they were all dependent on the same state variable, which of course
was a local too, as shift registers are just too darn sophisticated.
The main VI was around 10MB and all the subVIs (maybe around 100) together
were less than that. Opening the diagram on the original computer, an
industrial computer of some two years ago, took 20 or more seconds before
it would show up! I was tempted to throw it all away but as the state
machines themselves represented the entire process intelligence of the
system, with little or no documentation besides the LabVIEW diagram,
I reconstructed the system from there with each sub state machine in
its own subVI and removing as many globals and locals as possible.

After some four weeks of programming it is now in the final testing phase
and seems to work as originally intended, with a few quirks removed and
optimizations in place.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: low level device drivers

2004-02-14 Thread Rolf Kalbermatter
Howard, John [EMAIL PROTECTED] wrote:

I am considering using a PCI card (Sabtech NTDS interface card for those
interested), and it doesn't seem to have a nice simple DLL like I am used to. 
Instead it has a kernel mode Windows 2000 device driver, and I am expected to
use ...normal Windows I/O system calls... to talk to it.  The driver is
written to be part of the Serial Port driver class, BUT it can not be treated
simply as a serial port since ...there are several restrictions on the I/O
calls..., and ...it has its own set of DeviceIoControl functions.

I know so little about Windows device drivers that I don't really even
understand the paragraph I just wrote.  Does this mean I can't talk to the
card directly from LabVIEW?  If I can, is it difficult for someone who uses
DLL's fairly often?  Can I use the Win32 API to talk to it?  Does anyone have
any suggestions on how to approach this?

Well, you load a device driver by calling CreateFile() with the device driver
name (usually something like \\.\MyDevice). Newer device drivers do not use
a hardcoded device name but a class GUID which you have to translate into the
appropriate device name through the setupapi functions before loading the driver.

After that you basically call DeviceIoControl() and possibly ReadFile() and 
WriteFile() on the handle returned from CreateFile(). At the end you close the
handle with the CloseHandle() function. It is not impossible to call all these 
functions directly from LabVIEW with the Call Library Node but especially
DeviceIoControl() gets nasty very fast.

So it is possible but unless you only have to implement one or two simple
DeviceIOControl() calls it would be probably better to implement the interface
in a DLL with a more LabVIEW friendly call interface.
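In C the whole dance looks roughly like the sketch below; the device name, control
code and buffer size are placeholders, the real values have to come from the Sabtech
documentation:

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE hDev;
        BYTE outBuf[64];
        DWORD returned = 0;
        /* placeholder control code; the driver defines its own set of codes */
        DWORD ioctlStatus = CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800,
                                     METHOD_BUFFERED, FILE_ANY_ACCESS);

        /* open the driver through its (hypothetical) device name */
        hDev = CreateFileA("\\\\.\\NtdsDevice0", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
        if (hDev == INVALID_HANDLE_VALUE)
        {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        /* ask the driver for some status information through its own IOCTL */
        if (!DeviceIoControl(hDev, ioctlStatus, NULL, 0,
                             outBuf, sizeof(outBuf), &returned, NULL))
            printf("DeviceIoControl failed: %lu\n", GetLastError());

        CloseHandle(hDev);
        return 0;
    }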

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Associate a type of file with a LV7 app

2004-02-13 Thread Rolf Kalbermatter
Dany Allard [EMAIL PROTECTED] wrote:
 
Is it possible to associate a type of file, like *.txt with Notepad.exe,
with a LV7 app.
I know that LV7 has a new feature that permits retrieving the command line
used to call the app.
But I don’t think that would be useful for what I want to do. 
I want to Double-Click on my file and my LV7 app start automatically and the
file is read and displayed.
 
You will need the command line too! But first you need to go into the
registry and create the appropriate keys to associate your file ending with
your application. Looking in the registry at how things are done, for instance
for .vi files, should give you enough of an idea.

Under HKEY_CLASSES_ROOT you have a key .vi with its value set to some sort of name,
in this case LabVIEWInstrument. Under the same root key you get another
key with that LabVIEWInstrument name and some subkeys and values in there.
The important one is shell\open\command, whose value is the path to your
executable followed by %1 as parameter, to tell the Windows shell to insert
the file name the user has clicked on at this place.

Then read the command line in your application and do what you need to do
with it.
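As a sketch, a .reg file for a hypothetical MyViewer.exe handling .dat files would
look something like this (names and paths are placeholders, adjust them to your
own application):

    Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\.dat]
    @="MyViewerFile"

    [HKEY_CLASSES_ROOT\MyViewerFile\shell\open\command]
    @="\"C:\\MyViewer\\MyViewer.exe\" \"%1\""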

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Time Zone

2004-02-05 Thread Rolf Kalbermatter
Daniel A. Gilmore [EMAIL PROTECTED]

I need to save absolute time stamps to an Excel compatible CSV file and need
the time zone adjustment from Universal Time to do this since the LV time
stamp is in Universal Time.  Is there a way in LV to get this info.

Only LabVIEW's internal int32/timestamp seconds since 1904 is in UTC. Once
you format that timestamp into text format, either in a front panel control
or using the diagram primitives for this, and even with the seconds to date/time
cluster functions, you get the local time according to the timezone configured
in your system.

Unless you live on the Greenwich meridian, you will see that 0 seconds will
not display as 00:00:00 1 Jan 1904 in a numeric display formatted to show
Date & Time format, which shows you your timezone offset.

If you need the seconds since value in local time, you need to calculate the
timezone offset and subtract it from the LabVIEW seconds. There are VIs to do
that on NI's site somewhere. Try to look for timezone calculation or GMT or
something like that.

The idea is to convert a known seconds-since value such as 86400 into a date/time
cluster, then from its days, hours, minutes and seconds calculate the corresponding
local seconds-since value; subtract 86400 from that and you end up with the
timezone offset.
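The same trick written out in C, just to illustrate the idea (this sketch ignores
daylight saving for simplicity):

    #include <time.h>
    #include <stdio.h>

    int main(void)
    {
        time_t known = 86400;                  /* a known UTC seconds-since value */
        struct tm utc = *gmtime(&known);       /* broken-down UTC time */
        time_t rebuilt = mktime(&utc);         /* mktime interprets it as local time */
        long offset = (long)(known - rebuilt); /* seconds east of UTC */

        printf("timezone offset: %ld seconds\n", offset);
        return 0;
    }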

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]




Re: Time Zone

2004-02-05 Thread Rolf Kalbermatter
Jean-Pierre Drolet [EMAIL PROTECTED]

 Daniel A. Gilmore [EMAIL PROTECTED]

 ...and if I do not have the internet toolkit?

Get the timestamp for the LabVIEW epoch (0 = Jan 1st, 1904 GMT) and get the hour. The
difference from midnight is your Time Zone offset.

This does not work flawlessly, as you have both positive and negative timezone
offsets of up to 12 hours. Also note that there are timezones which do not have a
full hour offset but half hours and possibly even more obscure ones.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: [W] VI Server vs. System Exec.

2004-02-05 Thread Rolf Kalbermatter
Subject: RE: [W] VI Server vs. System Exec.
From: Junginger, Johann [EMAIL PROTECTED]
Date: Wed, 4 Feb 2004 18:15:34 -0500 


As I understand it a version of LabVIEW can always(?) open a VI written with
an earlier LV version but the converse is not true. 

That is only true for the development system. I think she said something about
built VIs which I would assume would mean VIs built into an executable. In that
case a runtime system can only open VIs saved in the same major and minor version
as it lacks the compiler support needed to recompile a lower version VI.

In that case, if they are built as executables and not just built LLBs, it would be
indeed possible to launch those executables with the System Exec VI, which will launch
a new independent process that will automatically load the runtime system
needed for itself.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: IMAQ references/LV7

2004-02-05 Thread Rolf Kalbermatter
Jack Hamilton [EMAIL PROTECTED]

Thanks for the IMAQ examples. What I am finding is that if you drag the
property node into another blank VI then drag the reference to the blank VI
front panel – problems occur:
1. The reference does not show as an IMAQ reference but an odd looking thing
with a red x “PlugInControl Refnum”; when I try to connect this reference to
the property node in the blank VI – I get an ‘insane’ error and LV7 crashes.
2. I can’t seem to find the ‘proper’ IMAQ reference.

I really want to do the coding at a sub-VI level not on the top level – this
is where the problems lie.

The IMAQ control is not really a full LabVIEW control, at least on my system.
It has only four generic properties and no method at all to work with. So I
guess the real problem is that it is in fact a control added to LabVIEW by
the IMAQ library, with LabVIEW itself only providing minimal support. To
manipulate properties and data of the IMAQ control you really need the
IMAQ library with its VIs, which implement all the important operations in
an external DLL with a little low-level hand-holding by private exported
LabVIEW C functions.

You can see a little of the low level magic in the IMAQ Grab and associated
VIs which access the NI-IMAQ hardware drivers to put acquired images into
an IMAQ control.

Forget about the property node for an IMAQ control for the time being. Maybe
a future LabVIEW version will provide an IMAQ control which is more like a
normal LabVIEW control, but for now it really is more like a native container.
You really need to use the VIs from the IMAQ library itself and/or the NI-IMAQ
acquisition functions.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Subpanels in 7.0

2004-01-31 Thread Rolf Kalbermatter
Jim West [EMAIL PROTECTED] wrote:
 
I am disappointed that in order to use the subpanels; I have to have the
caller do the work.  It would be so much easier if the VI could just use
a subpanel control reference to insert themselves in a subpanel.

Well, you usually have one caller and many subpanel VIs. Implementing it
once in the caller instead of 50 times, once in each of those VIs, seems like a
lot of time saving to me. It also seems more logical to me to do it
in the caller, but that may be just my warped mind.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: [A] Call by Ref Open VI Ref Question

2004-01-29 Thread Rolf Kalbermatter
PJ M [EMAIL PROTECTED] wrote:
 
I have a main VI that call a SubVI containing a call by reference.
The call by ref pass some parameters to the target VI.
One of the parameter is from a control on the main VI (UI). As long as I
don't hold the control (a slider), everything is fine. If I do hold the
control, the call by reference does not execute until I release the
mouse (to be more specific the Open VI reference does not execute).

Open VI Reference needs to be protected and for that reason always
executes in the UI thread. The UI thread also handles all user interaction.
There is not much you really can do other than opening the VI reference once at
the beginning and storing it in a shift register or something like that.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Interupt

2004-01-29 Thread Rolf Kalbermatter
Joe [EMAIL PROTECTED] wrote:

 My aim is to see if LV can handle low-level interrupts in
the way that C++ or C can, for example to disable the
keyboard interrupt or floppy drive, or assign the interrupt
handler of one peripheral to the other.

Not really, or rather, you could always write a C DLL and call
it from LabVIEW. But the real problem is that in modern
OSes an application is not only not supposed to do such things, it is
almost impossible to do without device driver support.
If there is a WinAPI you can call in a C program to do what
you want, however, you can always use the Call Library Node to
call that API.
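As a sketch, a wrapper DLL function that the Call Library Node could call might look
like this; BlockInput() is only an example of a Win32 API with no direct LabVIEW
equivalent (it needs Windows 2000 or later and user32.lib), and the exported name is
of course your own choice:

    #define _WIN32_WINNT 0x0500
    #include <windows.h>

    /* Exported with the stdcall convention so the Call Library Node
       can be configured for it directly. */
    __declspec(dllexport) long __stdcall LVBlockInput(long block)
    {
        return (long)BlockInput(block ? TRUE : FALSE);
    }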

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: swapping scales incompatible with EnableAutoScales=False ?

2004-01-22 Thread Rolf Kalbermatter
Xavier Michalet [EMAIL PROTECTED] wrote:

I am facing an annoying fact (LV7, W2000)... Recently I learned about 
the EnableAutoScales=False option in the LabVIEW ini file, that helps 
prevent the graph display area from expanding and shrinking as the scale 
display changes. However, it seems that it comes with a price, namely 
that you can duplicate scales, but they won't show up as two scales. 
Only one seems visible, and there is therefore no way to swap scales 
from one side to another. I resorted to the annoying sequence of action: 
1) Close Labview 2) Edit the ini file to EnableAutoScales=True 3) 
Restart LV and add/swap scales  4) Close LV 5) Edit the ini file back to 
EnableAutoScales=False 6) Restart LV and proceed with my work.
Obviously I am missing an option accessible via the LV menu that would 
toggle this EnableAutoScales from True to False. Is there any such thing 
or is this only that graphs are still work in progress in LV (which we 
know from many other features anyway) ? Should I write my own Vi that 
would change the ini file back and forth ?

You don't need that global INI file setting anymore in LabVIEW 7.0, as the
setting is now configurable per graph in the popup menu.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: Array Wish list

2004-01-21 Thread Rolf Kalbermatter
On Mon, 19 Jan 2004 22:42:22 -0800 Jason Dunham wrote:

 As far as I've ever heard, the transpose arrays don't use any extra
 memory.  The transpose function is done in place. I'm sure a few extra
 bytes are needed for temporary storage, but supposedly the same array
 buffer is reused. I would guess that the graph transpose option is also
 not a memory hog.  

Although a transposed array will consume the same amount of memory as
the original array, the transpose function involves creating a new array,
populating it using data from the original array, then deleting the
original. This can have a significant effect on memory usage with a
large array.

I'm not really sure whether it really does, but it is conceivable that
the transpose function may actually operate in place with just one single
temporary storage value, by swapping the two elements from the start and tail
of the corresponding buffer part. It certainly did not do so in the beginning,
but as LabVIEW evolves a lot of array manipulation functions have been
improved.

Tests indeed show an additional buffer being used compared to when no Transpose
is used. Also, using the unofficial INI file setting showInplaceMenuItem=True,
you can verify that the Transpose function does not seem to operate in place.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Another wish-list item - 2D array concatenation

2004-01-20 Thread Rolf Kalbermatter
Rubin, Gary [EMAIL PROTECTED] wrote:

While we're on the subject of wish lists, I'd like to propose a more flexible build
array function.  
My current frustration is caused by 2D array concatenation.  If I have two NI-DAQ
devices, each collecting 3 channels with 1000 samples, I'd like to be able to
concatenate the outputs of the AI Reads so that I have an array of 6 channels of
1000 samples each.  Instead, I get a 3 by 2000 array.  The workaround is to
transpose the outputs of each AI Read, concatenate, then transpose the resulting
array - not a very efficient approach as the number of samples increases.  
It would be nice if the build array function had the options Concatenate columns
and Concatenate rows.

How about using Insert Into Array with the number of scans wired to the second index
input? Simple and probably the most efficient method.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: State machine diagram editor

2004-01-16 Thread Rolf Kalbermatter
Howard, John [EMAIL PROTECTED] write:

It lets you create state machine based LabVIEW programs by 'simply' drawing a
state diagram.  You can switch between diagram view and code view.  It
is probably mostly useful for documentation, where state diagrams are needed.

It is not part of any version of LabVIEW, you have to pay for it separately. 
The price is $995.  (the standard price for any LabVIEW tool from NI)

I am selfishly hoping that no one buys it so that they may change their mind
and include it LabVIEW Professional or something like that.  (For $995 I'll
keep using Visio to draw the few state diagrams I need)

Well, we usually use SmartDraw (www.smartdraw.com) instead. It's about 1/10 of
the price of Visio and does IMO just as good a job at creating state transition
diagrams or whatever your favorite state machine documentation is.

I have looked at the State Machine Toolkit and believe that it is fine for the
standard state machines a lot of users usually encounter, but my state machines
usually tend to be just a tiny little bit more involved, so that I would
have to hand edit the generated state machine afterwards anyhow, and once modified
manually you can't seem to go back to continue with the State Diagram Editor. 

If it would be part of the Professional Developer version of LabVIEW I might
give it another try ;-)

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: DAQ OPC on Windows XP?

2004-01-15 Thread Rolf Kalbermatter
John Howard [EMAIL PROTECTED] wrote:

If I try to view the registry keys on the remote computer using 'regedt32', I can not
view anything below HKEY_LOCAL_MACHINE\SYSTEM, even though I have modified security 
for
the following keys to make sure DCOM should work.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurePipeServers ...
I gave Everyone full access to these keys just to be sure security wouldn't be a 
problem.

I have no experience with either OPC DAQ or XP, but here it goes:

Windows security is tricky at its best; maybe every security system is like that. However
I do believe that the inability to browse the hive in regedit may have to do with
the fact that a key has separate rights to view as well as to enumerate (expand) it. So in
order to browse to your key you might need to grant the enumerate privilege for the user
in question on all keys from HKLM\SYSTEM down to the one you want to get at. There
are actually a whole bunch of different privileges one can request when opening a
key. 

Maybe XP has changed somehow so that its RPC implementation just enumerates the
key hierarchy level by level until it gets at the one it needs, and that
would fail when the enumerate privilege is not enabled. Or DAQ OPC or Windows RPC
requests one privilege too many when trying to open a key, failing on that
one although it may not be needed for the operation in question.

As Microsoft has become more concerned with security in the past year or two, such small
but under certain circumstances far reaching modifications to core elements have become
more the rule than the exception.
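A small C sketch of the difference when opening a key; KEY_READ already includes the
enumerate right, whereas asking for KEY_ALL_ACCESS often fails on locked-down keys
even when reading would have been allowed:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY hKey;
        LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                                "SYSTEM\\CurrentControlSet\\Control",
                                0, KEY_READ, &hKey);  /* query + enumerate, nothing more */
        if (rc == ERROR_SUCCESS)
        {
            printf("opened with KEY_READ\n");
            RegCloseKey(hKey);
        }
        else
            printf("RegOpenKeyEx failed: %ld\n", rc);
        return 0;
    }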

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: State machines and passing references

2004-01-13 Thread Rolf Kalbermatter
Scott Serlin [EMAIL PROTECTED] wrote:

Let me try it.  I've been trying to right-click the case statement and
not the while loop.  This appears to be an oops on my part.

Cases don't have shift registers and neither does the event structure. You
only get tunnels, and they are created automatically when you route a wire
through the border. If you need local storage in a state machine (classic
with cases, modern with the event structure, or a combination of the two)
you need to create shift registers on the boundary of the loop you have to
put around the case or event structure anyhow.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




RE: ScrollBar

2004-01-13 Thread Rolf Kalbermatter
Subramanian, Shankar, K [EMAIL PROTECTED] wrote:

Thanks for your reply. I think I will stack them avoiding the scrollbar as I
would not need all the buttons to be visible. But just curious to know, is
there any way of automating scrollbar depending on the window?

You don't work with the scrollbar itself but rather change the
FrontPanel.Origin property of your VI reference.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Bug in Unflatten from String (LV7)

2004-01-13 Thread Rolf Kalbermatter
[EMAIL PROTECTED] wrote:

when unflattening certain binary data with Unflatten From String (LV7) I
get an error msg window Nicht genügend Speicher zum Abschließen dieser
Operation (* not enough memory to complete this operation). The error output
of the vi is incorrect.
Does anybody know a workaround for that bug?

I have binary files with different versions of a certain datatype. When
loading a file, I try to unflatten the data using the newest dataversion.
If I get an error, I try the 2nd newest dataversion and so on until I find
the right datatype. This method was ok up to LV6.01.
In LV7.0 the bug appears when I try to unflatten to the wrong datatype. My
datatype is an array of clusters of strings and numbers.

NI knows the bug but has no patch or workaround. NI service told me to
wait for LV7.1, maybe mid 2004, --- very ´good´ service.
This bug is a fatal problem for me, I cannot read many of my old measurement
data under LV7.0.
If I don't find a solution, the consequence will be that I throw out LV7 and go
back to LV6.01.

I can only second Stephen's message. The fact that it seems to work in
LabVIEW 6 is pure coincidence for the datatypes you are using, as I know
that trying to unflatten random (corrupted) data will often show the
same symptoms in various versions of LabVIEW.

The only thing I can think of that they might be trying to do is adding
C++ exception handling to the internal implementation of the Unflatten
function. This is a major modification of the code and not a bug fix at
all, so expecting NI to release a 7.0.1 just for that is not really realistic.

The correct action is of course to include a version number, indicating
both your own and the LabVIEW version, in front of your data so that you
can use the correct unflatten type in the first place.

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: using Eseries manual cal library call

2004-01-10 Thread Rolf Kalbermatter
[EMAIL PROTECTED] wrote:

Hi, when using the NI library call  I16 Get_Cal_date(I16 device, Char*
CalDate) we have a problem.  If we connect a string constant to the Char*
input of the lib call, we get the proper data out of the string out of the
library call, yet LabVIEW then crashes.  If we leave char* unwired, LabVIEW
does not crash, but we get an I32 on the string out - is that the addy of
the string?  If so, how can we get the string data.  Or, are we doing this
wrong?

A C function is normally not supposed to allocate a buffer itself. That
would give a lot of trouble, as the caller would need to know how to free
that buffer exactly to avoid crashes and memory leaks.

The function instead expects a buffer allocated by the caller, of at least
a certain size, to fill in the information it wants. So wiring just an empty
string constant will pass a 0 length buffer to the function, and when it then
tries to fill in the information you get a General Protection error.

You need to create a buffer long enough for the function. The function
documentation should contain the information about what size this has to
be.

Just create an U8 byte array with the needed size and convert it to a string.
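In C terms the Call Library Node has to emulate something like the sketch below; the
32-byte size and the device number 1 are assumptions, use whatever the NI-DAQ
documentation specifies:

    #include <stdio.h>

    /* prototype as given in the NI-DAQ header */
    extern short Get_Cal_date(short device, char *calDate);

    int main(void)
    {
        char calDate[32] = "";   /* buffer allocated by the caller, not by the function */
        short status = Get_Cal_date(1, calDate);

        printf("status %d, date %s\n", status, calDate);
        return 0;
    }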

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]
 




Re: Windows Time since When?

2004-01-06 Thread Rolf Kalbermatter
There is a bit of a discussion between me and a C programmer about the Time
Stamps in windows.

Windows itself has a number of time epochs.

First there is a 16 bit Windows API:
1) From the old times there is a DOS time which is really two 16 bit numbers with
bit packed interpretation (and behold: 2 seconds resolution)

The Win32 API has following time concepts:
2) The SYSTEMTIME is really a structure similar to the ANSI C structure tm
and thus represents absolute time.

3) The FILETIME is a 64 bit value representing the number of 100 ns intervals
since January 1, 1601

Last there is the C runtime library time:
4) msvcrt or any other C runtime library one might use in a program has the
ANSI C implementation of time(), which is usually a signed long int with the
number of seconds since January 1, 1970 (and will therefore have a new
millennium style problem sometime around 2038). The Visual C runtime really is just
a layer on top of the Win32 API.

LabVIEW's Get Date/Time In Seconds returns the number of seconds since
midnight January 1, 1904 UTC and does this on different platforms in different
ways.
On the Mac it simply requests the Mac time and then adjusts the returned local
time by the timezone offset.

On Windows it requests the SYSTEMTIME by calling GetSystemTime(), then converts
this into a FILETIME with SystemTimeToFileTime() and after that subtracts the
difference between January 1, 1601 and January 1, 1904 in 100 nanosecond
intervals, at the same time adjusting for the difference in interval length
by dividing the result by 10,000,000.
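A rough C sketch of that conversion (not NI's actual code; the 9561628800 seconds
are the span from Jan 1, 1601 to Jan 1, 1904, and the integer suffix and printf
format assume a Microsoft compiler):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEMTIME st;
        FILETIME ft;
        ULARGE_INTEGER t;
        unsigned __int64 lvSeconds;
        const unsigned __int64 offset1601to1904 = 9561628800ui64 * 10000000ui64;

        GetSystemTime(&st);              /* current UTC time, broken down */
        SystemTimeToFileTime(&st, &ft);  /* 100 ns intervals since Jan 1, 1601 */
        t.LowPart  = ft.dwLowDateTime;
        t.HighPart = ft.dwHighDateTime;

        lvSeconds = (t.QuadPart - offset1601to1904) / 10000000ui64;
        printf("LabVIEW seconds since 1904: %I64u\n", lvSeconds);
        return 0;
    }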

We are working on code that passes data with timestamps between LabVIEW and
C. The problem is LabVIEW’s “Get Date and Time in Seconds” is ‘since Jan 1,
1904.

My dusty C programming book states time_t is seconds since Jan 1, 1980 and
Unix is since Jan 1, 1970? The C programmer says it’s Jan 1, 1970.

According to ANSI, the epoch of January 1, 1970 really is arbitrary, although
quite popular, and any C runtime implementation might choose to use a different
epoch.

That said; is there a Windows function I can call to get the windows time in
seconds since whenever it actually is? I suspect the LabVIEW function is the
“Platform Independent’ referenced time.

Well, to go from LabVIEW time to ANSI C time you basically just subtract the number
of LabVIEW seconds you would get from the LabVIEW Get Date/Time In Seconds
function for midnight of, most probably, January 1, 1970 UTC. At least the
Visual C runtime library does use this epoch. If your C programmer uses a
different C compiler he will have to check there.

You do need to watch out for timezone offsets however when trying to work in
such a way. The ANSI C documentation I have isn't really specific about whether the
return value of time() is in UTC or local time. Visual C however seems to return UTC.
You can try to avoid this by actually using gmtime() and mktime().
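A sketch of the epoch shift in C; 2082844800 is the number of seconds between
midnight Jan 1, 1904 UTC and midnight Jan 1, 1970 UTC:

    #include <time.h>
    #include <stdio.h>

    #define LV_UNIX_EPOCH_OFFSET 2082844800UL   /* seconds from 1904 to 1970 */

    int main(void)
    {
        time_t unixNow = time(NULL);    /* seconds since 1970 (UTC with Visual C) */
        unsigned long lvNow = (unsigned long)unixNow + LV_UNIX_EPOCH_OFFSET;

        printf("Unix seconds: %lu  LabVIEW seconds: %lu\n",
               (unsigned long)unixNow, lvNow);
        return 0;
    }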

Rolf Kalbermatter
CIT Engineering Nederland BVtel: +31 (070) 415 9190
Treubstraat 7H   fax: +31 (070) 415 9191
2288 EG Rijswijkhttp://www.citengineering.com
Netherlands mailto:[EMAIL PROTECTED]