From: -= JB =- [EMAIL PROTECTED]
It's in your Rev directly if you are using it inside the IDE. :-)
My bad. It should read directory as in a folder. Open your Rev folder
(where Rev lives on your computer) and you'll find it there. You can open
the file using any text editor. The file
On Jun 1, 2007, at 2:07 AM, Scott Kane wrote:
It's in your Rev directly if you are using it inside the IDE. :-)
Scott
Hi Scott,
Thanks for the reply. If it is directly in Rev how do I look at
it, move it or remove it if I want to.
-=JB=-
On May 31, 2007, at 10:07 PM, -= JB =- wrote:
That is very nice.
Where exactly is the text file being saved right now?
thanks,
-=JB=-
Don't get me wrong. I know the handler to save the data
is in the script of the okay button and the handler script
is located in the
From: -= JB =- [EMAIL PROTECTED]
I am not able to find the actual text file after it's saved. Is
it saved as a file on my hard drive and I just can't find it or
is it somehow saved as a file inside the stack.
It's in your Rev directly if you are using it inside the IDE. :-)
Scott
I have read some about Valentina. They say it is fast
and can be used with
the Studio version of Rev. Oracle needs the higher
version of Rev.
I tried to read the license to learn about any royalties
I would need to pay
with Valentina but I really didn't find the
On Jun 1, 2007, at 3:44 AM, Scott Kane wrote:
From: -= JB =- [EMAIL PROTECTED]
It's in your Rev directly if you are using it inside the IDE. :-)
My bad. It should read directory as in a folder. Open your Rev
folder (where Rev lives on your computer) and you'll find it there.
You can
On Jun 1, 2007, at 7:56 AM, Lynn Fredricks wrote:
Unless you want to do something that violates the EULA, Valentina is
royalty-free. With the ADKs it's pretty simple - you can develop and deploy
as many apps as you like and ship as many units as you like.
With VDN, you can deploy
On Jun 1, 2007, at 12:07 AM, -= JB =- wrote:
That is very nice.
Where exactly is the text file being saved right now?
Thanks! It is saving the file in the same directory as the engine if
you use it in the Rev IDE. I use this program in a standalone form.
So it saves it in the
thought about to
access my data. Perhaps worth a try next time.
Thanks
Tiemo
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:use-revolution-
[EMAIL PROTECTED] On behalf of J. Landman Gay
Sent: Wednesday, 30 May 2007 21:19
To: How to use Revolution
Subject: Re: All this talk
Well first of all, thanks yet again to Jacque for Find and Mark and Scroll
through - wonderful, thanks! Another of these things which are totally
obvious the minute you read them, and which you would never have found for
yourself.
Here is what I don't get though about Richard's approach.
I was wondering about the speed of using filemaker with a RunRev front end
in comparison to using RunRev with SQLite or Valentina.
I have been using SQLite and don't have very many records but do some
convoluted SQL that is fairly slow (many joins). I also use SQLitemanager in
conjunction with my
On 31/5/07 4:18 PM, Bill [EMAIL PROTECTED] wrote:
Hi Bill,
I was wondering about the speed of using filemaker with a RunRev front end
in comparison to using RunRev with SQLite or Valentina.
I have been using SQLite and don't have very many records but do some
convoluted SQL that is fairly
Peter Alcibiades wrote:
Here is what I don't get though about Richard's approach. Obviously it works,
but I can't see how to do it. Two cases; maybe I will beg for help on the
second one later. Case one is a data set of about 3000 records, each with
about 40 fields. Some of the fields are a
On May 31, 2007, at 1:55 PM, J. Landman Gay wrote:
Or you could store it in a text file and just read that in. In any
case, it's all the same approach; store the data as a single text
variable. With this method, you use offset() or lineoffset() to find
the record(s) you want, and use a
On May 31, 2007, at 3:56 PM, -= JB =- wrote:
On May 31, 2007, at 1:55 PM, J. Landman Gay wrote:
Or you could store it in a text file and just read that in. In any
case, it's all the same approach; store the data as a single text
variable. With this method, you use offset() or lineoffset()
On May 31, 2007, at 3:55 PM, J. Landman Gay wrote:
Or you could store it in a text file and just read that in. In any
case, it's all the same approach; store the data as a single text
variable. With this method, you use offset() or lineoffset() to
find the record(s) you want, and use a
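Jacque's "one big text variable" method can be sketched outside Rev as well. The following is a rough Python analogue (the data and the function name are invented for the demo, not from the thread): keep every record as one line of a single text variable, with tab-delimited items, and locate a record the way lineoffset() would, by finding its line and splitting out the items.

```python
# Minimal sketch of the single-text-variable method: records are
# lines, fields are tab-separated items within each line.
RECORDS = (
    "1001\tAlice\talice@example.com\n"
    "1002\tBob\tbob@example.com\n"
    "1003\tCarol\tcarol@example.com\n"
)

def find_record(data, key):
    """Return the fields of the first line whose first item equals key."""
    # Rough analogue of lineoffset(): locate the line, then split items.
    for line in data.splitlines():
        fields = line.split("\t")
        if fields and fields[0] == key:
            return fields
    return None

print(find_record(RECORDS, "1002"))  # ['1002', 'Bob', 'bob@example.com']
```

The whole dataset lives in RAM as one string, which is exactly the trade-off discussed later in this thread.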
On May 31, 2007, at 9:06 PM, Mark Talluto wrote:
Here is an example of this method. I use this to store customer data and
registration information. One can easily modify it to match their
needs.
www.canelasoftware.com/pub/rev/Key_Maker.rev.zip I removed some
code that makes our key system
From: Richard Gaskin [EMAIL PROTECTED]
Richard,
Everything in computing involves tradeoffs. The question of HC's storage
vs. Rev's is about paging:
Indeed. Makes sense.
With unusual care it was possible to have an unusually low number of
corrupted stacks in HC, but I never met a
And think about it: since every Rev object has multiple property sets,
and a stack can have any number of cards, and cards can have groups,
etc. -- all this means you can have richly hierarchically-ordered data
sets using just custom properties. Hierarchies reflect much of the
world's
Now I have to get into this very interesting thread as a rev novice too.
Storing data is one thing, and the possibilities of Rev are really perfect.
But how do you get good performance accessing the data, however it is stored, in a
stack (cards / properties / properties loaded into arrays)?
Saying I
From: Peter Alcibiades [EMAIL PROTECTED]
Guys, if this is such a great feature of Revolution - and I believe it,
the description is suggestive, promising, interesting - possibly someone
with
the good of the platform at heart should consider writing a tutorial and
example on it?
Dan
On 5/30/07, Scott Kane [EMAIL PROTECTED] wrote:
I've read about this from time to time. As I know you'll be aware we have
this problem on Windows no matter what we do to the file or the file
type - but it's become a heck of a lot better since Windows 2000 and up.
Hi Scott,
Are you saying you
From: Chipp Walters [EMAIL PROTECTED]
G'day Chipp,
Are you saying you have Rev stack corruption problems on Windows?
One person had problems on Win 98 with one of my stacks. Repeatedly. I
took the same application to another machine (running Win 2K) and there was
not a single error.
But for larger projects, and especially projects where you
need to be able to provide support to many users, it's
imperative to separate the business logic from the data. You
can still do this using RunRev stacks, just have the stacks
hold only the data, and you move it from card to main
From: Lynn Fredricks [EMAIL PROTECTED]
I strongly agree with Chipp - of course I have some interest in databases
with Valentina. An application development environment really has to be
able
to do everything, yet it cannot be good at everything. By storing data in
a
database, you can leverage
On 30 May 2007, at 04:57, Joe Lewis Wilkins wrote:
Jesse,
In case you don't know, HyperCard was written by a genius in
assembly language. Here I'm going to make an assumption (with all
of the known dangers of doing so), Rev was written by a good
programmer; probably in a high or higher
Hey Dave,
My first comment when I started this thread was another great
controversy (smile). No question but what HC had its limitations.
Without CompileIt I would have been real discouraged back then; but
as the machines got faster, particularly with my externals written in
native
Scott, Joe, et al:
I believe Rob Cozens does something of the sort [partially load a
db stack] with Serendipity, but the question is whether it's really
worthwhile given it's all there already with a real database.
I've followed this thread this far wondering should I bother to
mention
Richard, et al:
Why bother with the overhead of storing the data in fields on cards,
when you can easily parse item and line chunks of a single block of
data so very efficiently?
This is basically how SDB handles non-binary data. The field
delimiter character for each record type is
Scott Kane wrote:
- Original Message -
Mac OS X. It was very fast loading the text file. But each record was
fairly small, so 40,000 of them wasn't as large as one might think. It
was measured in megs rather than gigs, but I don't remember exactly
how big it was.
The original data
On the other hand (and I'm not actually advocating it, since I've
never tried it), it would also be possible to build indexes of the
data that are kept in memory, while keeping the actual data in a
collection of many stack files which are then loaded and unloaded as
required.
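The idea above - indexes kept in memory while the data sits in many external files that are loaded and unloaded on demand - can be sketched in a few lines of Python. Everything here (file layout, chunk size, function names) is invented for illustration; the point is only that the in-memory index stays small while lookups touch a single file.

```python
# Sketch of "index in RAM, data on disk": a dict maps each record key
# to the file holding it, so a lookup loads only that one file.
import json
import os
import tempfile

def _flush(tmpdir, n, chunk, index):
    """Write one chunk of records to its own JSON file and index it."""
    path = os.path.join(tmpdir, f"chunk{n}.json")
    with open(path, "w") as f:
        json.dump(chunk, f)
    index.update({key: path for key in chunk})

def build_store(records, chunk_size=2):
    """Spread records across several files; return a key -> filename index."""
    tmpdir = tempfile.mkdtemp()
    index, chunk, n = {}, {}, 0
    for key, rec in records.items():
        chunk[key] = rec
        if len(chunk) == chunk_size:
            _flush(tmpdir, n, chunk, index)
            chunk, n = {}, n + 1
    if chunk:
        _flush(tmpdir, n, chunk, index)
    return index

def fetch(index, key):
    """Load only the one file the index points at."""
    with open(index[key]) as f:
        return json.load(f)[key]

idx = build_store({"a": {"v": 1}, "b": {"v": 2}, "c": {"v": 3}})
print(fetch(idx, "c"))  # {'v': 3}
```

In Rev terms the files would be small data stacks opened and closed as needed, with only the index stack permanently in memory.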
As I say,
I'd go for SQLite or Valentina.
trying to make sense out of 1 million records, you need a good query
language, one that is able to do more than one operation with a
single query. Just imagine looping your indexes over and over again
trying to find the cross-references you're looking for.
Tiemo Hollmann TB wrote:
Now I have to get into this very interesting thread as a rev novice too.
Storing data is one thing, and the possibilities of Rev are really perfect.
But how do you get good performance accessing the data, however it is stored, in a
stack (cards / properties / properties loaded
*looks around* Am I the only one to have just heard of SDB?
More info? Links?
Cheers,
Luis.
On 30 May 2007, at 17:32, Rob Cozens wrote:
Scott, Joe, et al:
I believe Rob Cozens does something of the sort [partially load
a db stack] with Serendipity, but the question is whether it's
From: Rob Cozens [EMAIL PROTECTED]
Hi Rob,
I've followed this thread this far wondering should I bother to mention
SDB, considering the underwhelming response it has received from the
RunRev community?
No disrespect meant by my reference to Serendipity.
Scott Andre might offer some
From: J. Landman Gay [EMAIL PROTECTED]
I think that pretty much requires an external database. If you dump that
much data into a stack I don't think you'll like the results.
Indeed and I would never dream, normally, of anything less. But I was
curious to see how far I could push the
From: Andre Garzia [EMAIL PROTECTED]
I'd go for SQLite or Valentina.
trying to make sense out of 1 million records, you need a good query
language, one that is able to do more than one operation with a single
query. Just imagine looping your indexes over and over again trying to
find the
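Andre's point about query languages can be made concrete with a tiny SQLite example. The tables and data below are invented for the demo; the point is that one SQL query with joins replaces the repeated index loops he warns about.

```python
# One query does what would otherwise take nested loops over indexes.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE words (id INTEGER PRIMARY KEY, word TEXT);
    CREATE TABLE xrefs (word_id INTEGER, ref_id INTEGER);
    INSERT INTO words VALUES (1,'karma'), (2,'vedas'), (3,'dharma');
    INSERT INTO xrefs VALUES (1,2), (1,3);
""")

# Find every word cross-referenced from 'karma' in a single query,
# instead of looping an index once per lookup.
rows = con.execute("""
    SELECT w2.word
    FROM words w1
    JOIN xrefs x  ON x.word_id = w1.id
    JOIN words w2 ON w2.id = x.ref_id
    WHERE w1.word = 'karma'
    ORDER BY w2.word
""").fetchall()
print([r[0] for r in rows])  # ['dharma', 'vedas']
```

With a million records, the database also keeps the data out of RAM and uses its own indexes for the joins.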
From: J. Landman Gay [EMAIL PROTECTED]
G'day Jacqueline,
With that much data, a SQL database is better. For smaller databases, you
can create one card per record and use the mark command to flag the
cards that match. For example:
snip helpful pseudo code
Then you loop through the marked
I have never used Valentina or the other database
mentioned but here is a question if I decide to use
one I make in Revolution.
One user said he was working with one million or
more files, so let's use that as an example. If I were
to make a text file with one million card ids in item
one and the
From: -= JB =- [EMAIL PROTECTED]
One user said he was working with one million or more files, so let's use
that as an example. If I were to make a text file with one million card
ids in item
one and the user name in item two and then put it into a variable and
perform a search for the id and
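JB's scenario - a text variable with the card id in item one and the user name in item two - scales much better if the variable is indexed once into a hash table, so that later searches are constant-time lookups rather than fresh scans. A hedged Python sketch with invented data:

```python
# Sketch of JB's two-item-per-line index: "cardID,userName" lines.
# Building a dict once turns every later lookup into a hash hit
# instead of another pass over the whole text.
data = "\n".join(f"id{n},user{n}" for n in range(1000))

# One pass to build the index...
index = {}
for line in data.split("\n"):
    card_id, user = line.split(",", 1)
    index[card_id] = user

# ...then each lookup is constant time, even with a million lines.
print(index["id42"])  # user42
```

The Rev equivalent would be an array keyed by card id, built once when the text file is read in.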
Rob-
Wednesday, May 30, 2007, 9:32:51 AM, you wrote:
I've followed this thread this far wondering should I bother to
mention SDB, considering the underwhelming response it has received
from the RunRev community?
Interesting. From my end of things, I think the underwhelming
response is
Hi all,
Guess I'm going to start up another great controversy. Again I'm
hearkening back to my HC days. When it was first released, one of its
main claims to fame was the question as to whether or not it WAS a
database. Certainly, it had all of the attributes and features of
one. Even
Joe,
check http://www.himalayanacademy.com/resources/lexicon it is a
lexicon for those interested in Hinduism. It not only performs
searches but it has cross-references between words. For example
search for karma or vedas, if you want to see a quick query just
search for a this will give
From: Joe Lewis Wilkins [EMAIL PROTECTED]
Guess I'm going to start up another great controversy.
I shouldn't think. This is a very interesting discussion as far as I am concerned
at any rate.
Again I'm hearkening back to my HC days. When it was first released, one
of its main claims to fame
From: Andre Garzia [EMAIL PROTECTED]
check http://www.himalayanacademy.com/resources/lexicon it is a
Very impressive!
Scott Kane
CD Too - Voice Overs Artist Original Game and Royalty Free Multi-Media
Music
Any sufficiently advanced technology is indistinguishable from magic.
Arthur C
Andre,
Thanks, you've confirmed my suspicions. So most of us don't really
need to be concerned with other DBs.
Joe Wilkins
On May 29, 2007, at 7:50 PM, Andre Garzia wrote:
Joe,
check http://www.himalayanacademy.com/resources/lexicon it is a
lexicon for those interested in Hinduism. It
Guess I'm going to start up another great controversy. Again I'm
hearkening back to my HC days. When it was first released, one of its
main claims to fame was the question as to whether or not it WAS a
database. Certainly, it had all of the attributes and features of
one. Even with SE30s as a
Scott,
Thanks, many people were involved in bringing this resource to life,
I believe we all smile when someone likes it! Don't we all love when
it works! :-)
remember for each query you do, Apache has to start the Revolution
engine, load the stack, load the database stack, perform the
Curiously, Sarah,
That was a method I used to archive HC records, rather than just
saving the entire stack with all of its overhead. That would be a
good idea for archiving Rev records as well. In the HC days I did it
because I was archiving data on 3.5 400k or 800k floppy disks. It
was
For small databases, (like the simple Address Book sample), the
embedded database/biz logic works pretty well.
But for larger projects, and especially projects where you need to be
able to provide support to many users, it's imperative to separate the
business logic from the data. You can still
From: Chipp Walters [EMAIL PROTECTED]
One huge advantage of using an external database like SQLite is the
ability to store data outside of RAM. HyperCard used to use a non-RAM
based design, but Rev stores everything in RAM. So, if you have an address
book which has 100,000 records in a single
From: Andre Garzia [EMAIL PROTECTED]
Hi Andre,
Thanks, many people were involved in bringing this resource to life, I
believe we all smile when someone likes it! Don't we all love when it
works! :-)
Indeed! I love finding stuff out about other cultures, too.
remember for each query you
Chipp,
don't you fear stored procs? (does SQLite have stored procs?)
Yes, the RAM issue is very important. When we built the lexicon, we
started with a 9 megabyte stack (due to a mistake with a background
group by yours truly), we wondered about RAM issues. We optimized the
stack and now
One huge advantage of using an external database like SQLite is the
ability to store data outside of RAM. HyperCard used to use a non-RAM
based design, but Rev stores everything in RAM. So, if you have an
address book which has 100,000 records in a single stack, then all the
records would need
Jesse,
In case you don't know, HyperCard was written by a genius in assembly
language. Here I'm going to make an assumption (with all of the known
dangers of doing so), Rev was written by a good programmer;
probably in a high or higher level language. Big difference. Then Rev
has to do
Scott Kane wrote:
I'd be curious to know why RR decided to
change the behaviour of how stacks are read (from file as opposed to
loaded fully into RAM).
They didn't; the engine has always worked that way since its original
MetaCard incarnation. Scott Raney, the creator, wanted speed and so
On our lexicon stack we have each card name being the word it stores,
we use a combination of filters and RegEx to search the data, this
way, I don't need to loop the cards, I just need to iterate over the
card names, that's why it is fast. I think if you create smart indexes
and use clever
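Andre's trick - putting the key in the card *name* and filtering the names with a regex instead of visiting each card - has a direct Python analogue. The word list below is invented for the demo; the pattern-matching plays the role of Rev's filter command over the cardNames.

```python
# Sketch of searching card names instead of card contents: keep the
# keys in one small list and filter it with a regex, so the bulky
# per-card data is never touched during a search.
import re

card_names = ["karma", "karmayoga", "vedas", "dharma", "vedanta"]

def search(pattern):
    """Return every card name matching the regular expression."""
    rx = re.compile(pattern)
    return [name for name in card_names if rx.search(name)]

print(search(r"^ved"))   # ['vedas', 'vedanta']
print(search(r"karma"))  # ['karma', 'karmayoga']
```

The speed comes from the index being tiny: only after a name matches would the corresponding card's data be fetched.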
From: J. Landman Gay [EMAIL PROTECTED]
They didn't; the engine has always worked that way since its original
MetaCard incarnation. Scott Raney, the creator, wanted speed and so wrote
the engine to load everything into RAM. The trade-off is that you need as
much RAM as the size of your stack
Probably Richard, Jacque or Ken will jump in here and correct me if
I'm wrong. But, as I recall, the primary reason for writing MetaCard
as a total RAM based product was it made it lightning fast. And to
this day, it is still a very fast programming environment. In many
ways, significantly faster
Andre Garzia wrote:
On our lexicon stack we have each card name being the word it stores, we
use a combination of filters and RegEx to search the data, this way, I
don't need to loop the cards, I just need to iterate over the card names,
that's why it is fast.
I wonder if you indexed the card
Scott Kane wrote:
From: J. Landman Gay [EMAIL PROTECTED]
I wrote a database with over 40,000 records, and for that one I loaded
a text file into RAM and then used a 1-card display stack to show the
desired record. This method requires that you write all your own
navigation and search
Recently, Chipp Walters wrote:
I have as much respect...if not more, for Scott Raney's efforts taking
the best parts of HC, speeding them up significantly and architecting
a solution for multiple platforms. You only need be around during the
MC days to recall what a perfectionist and stickler
- Original Message -
From: Chipp Walters [EMAIL PROTECTED]
Probably Richard, Jacque or Ken will jump in here and correct me if I'm
wrong. But, as I recall, the primary reason for writing MetaCard
as a total RAM based product was it made it lightning fast. And to this
day, it is still
- Original Message -
Mac OS X. It was very fast loading the text file. But each record was
fairly small, so 40,000 of them wasn't as large as one might think. It was
measured in megs rather than gigs, but I don't remember exactly how big it
was.
The original data I'm looking at
Scott Kane wrote:
I'd be curious to know why RR decided to change the behaviour of how
stacks are read (from file as opposed to loaded fully into RAM).
Everything in computing involves tradeoffs. The question of HC's
storage vs. Rev's is about paging:
HC is constantly picking up and