Re: Reading and writing large arrays to disk

2020-01-06 Thread Mitchell Shiller via 4D_Tech
So I tested my setup. V17R5, MacOS Mojave.
My results were slightly different. Your mileage may vary.

Compared BLOB, JSON Stringify Array, and OB SET ARRAY / JSON Stringify.

For writing an array that already exists in memory,
BLOB was fastest, with average times of 6-9 milliseconds.
JSA was in the 30-34 millisecond range.
OB / JS was in the 36-40 millisecond range.

Compressing the BLOB prior to writing slowed the process to about
20-21 milliseconds.
However, the file was about one third the size, shrinking from 2.3 MB to 0.74 MB.
Note that the files written with JSA and OB/JS were 1.1-1.3 MB in size.
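
Roughly, the three write paths compared above look like this (only a sketch;
arrText, $path and the object name are placeholders, not the actual test code):

  // arrText is the text array to save; $path is the destination path
C_TEXT($path;$json)
C_BLOB($blob)
C_OBJECT($obj)

  // 1) BLOB: VARIABLE TO BLOB (optionally COMPRESS BLOB), then BLOB TO DOCUMENT
VARIABLE TO BLOB(arrText;$blob)
COMPRESS BLOB($blob)  // optional: smaller file, slower write
BLOB TO DOCUMENT($path+".blob";$blob)

  // 2) JSA: JSON Stringify array, then TEXT TO DOCUMENT
$json:=JSON Stringify array(arrText)
TEXT TO DOCUMENT($path+".json";$json)

  // 3) OB / JS: OB SET ARRAY + JSON Stringify, then TEXT TO DOCUMENT
$obj:=New object
OB SET ARRAY($obj;"items";arrText)
TEXT TO DOCUMENT($path+"_ob.json";JSON Stringify($obj))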

Reading showed similar results.
BLOB: 10-16 milliseconds.
JSA: 36-43 milliseconds.
OB / JS: 38-47 milliseconds.

Reading a compressed BLOB took 16-18 milliseconds.
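
The read side is just the reverse (again only a sketch, not the test code):

C_BLOB($blob)
C_OBJECT($obj)

  // 1) BLOB back to the array
DOCUMENT TO BLOB($path+".blob";$blob)
EXPAND BLOB($blob)  // only if it was compressed
BLOB TO VARIABLE($blob;arrText)

  // 2) JSA back to the array
JSON PARSE ARRAY(Document to text($path+".json");arrText)

  // 3) OB / JS back to the array
$obj:=JSON Parse(Document to text($path+"_ob.json"))
OB GET ARRAY($obj;"items";arrText)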

In the end, any of these methods is fast enough for my needs. The lesson is 
that while a text concatenation loop looks simple and intuitive, it incurs huge 
overhead and is not usable here.

I guess your choice would depend on your need / preference for smaller file 
size, readability etc.

Thanks for everyone’s input.

Mitch

>> 
>> Writing
> I believe disk access is now heavily cached by the disk controller, the system 
> and (probably) 4D, with asynchronous access. I would be surprised if this is 
> the bottleneck, but it needs to be tested.
> 
> So I tested various approaches; the code is below. I skipped the first option - 
> it is painfully slow - and added another possibility - similar to option 3, but 
> instead of OB SET ARRAY / JSON Stringify, I used JSON Stringify Array. This 
> turned out to be the fastest method.
> 
> Here are the results as a screenshot; sorry, too lazy to write them down.
> 
> My conclusion: direct writing to disk is pretty fast, grows linearly with the 
> size of the array, and depends only weakly on the size of the elements, so it 
> seems to be a good choice for large arrays and solutions that need to be scalable.
> 
> VARIABLE TO BLOB is too slow. I did not try compression, but that would not 
> improve performance, IMHO.
> 
> JSON Stringify Array is the fastest solution. It seems to grow slightly faster 
> than linearly with the overall size of the data, so it may be slower for very 
> large arrays / long elements, but I am surprised how well these libraries are 
> optimised. As the code is really simple, I would go with this.
> 

Re: Reading and writing large arrays to disk

2020-01-06 Thread Chip Scheide via 4D_Tech
Variable to blob
blob to document

done as fast as possible from 4D
Chip


> On Jan 6, 2020, at 10:50 AM, Kirk Brooks via 4D_Tech 
> <4d_tech@lists.4d.com> wrote:
> 
>> I agree with Chuck here - writing a line at a time is slow. It's very
>> secure though. So it's good if you may crash - whatever has already been
>> written stays written to disk. But otherwise better to buffer some and then
>> write.
> 
> I think that depends on whether it’s better to have partial data in 
> the file than correct data. If writing partial data will cause the 
> consumer of that text file to generate incorrect results or crash, 
> then it’s better to make sure it’s all or nothing, typically using an 
> atomic write mechanism like writing to a temp file, and when that’s 
> completed successfully, moving the temp file to the destination path. 
> 
> If performance is important, concatenating text 200K times is almost 
> definitely the wrong way, and the best way depends on what version of 
> 4D you’re using, the total size of the text, how fast is fast enough, 
> and possibly whether you’re writing to SSD or spinning hard drive.
> 
> Jim Crate
> 

Hell is other people 
 Jean-Paul Sartre

Re: gitignore for projects

2020-01-06 Thread John DeSoi via 4D_Tech


> On Jan 6, 2020, at 12:13 PM, Tom Benedict  wrote:
> 
> Just to clarify, you’re talking about the “Comments” page in the Explorer, 
> not inline comments in code, right? It’s been a few months, but when I 
> exported the structure in 17R4 inline comments were preserved.

Yes, comments in the source code are preserved. The Explorer Comments page does not 
work in project mode, and the commands METHOD GET COMMENTS and METHOD SET 
COMMENTS do not work either.


John DeSoi, Ph.D.


Re: gitignore for projects

2020-01-06 Thread Tom Benedict via 4D_Tech
On Jan 6, 2020, at 07:54, John DeSoi via 4D_Tech <4d_tech@lists.4d.com> wrote:
> 
>> @John - method comments are gone - does that mean that they aren't even
>> exported when the project is created?
> 
> Completely gone and not exported in project mode as far as I can tell.
> 
Just to clarify, you’re talking about the “Comments” page in the Explorer, not 
inline comments in code, right? It’s been a few months, but when I exported the 
structure in 17R4 inline comments were preserved.

Tom Benedict

Re: Reading and writing large arrays to disk

2020-01-06 Thread Jim Crate via 4D_Tech
On Jan 6, 2020, at 10:50 AM, Kirk Brooks via 4D_Tech <4d_tech@lists.4d.com> 
wrote:

> I agree with Chuck here - writing a line at a time is slow. It's very
> secure though. So it's good if you may crash - whatever has already been
> written stays written to disk. But otherwise better to buffer some and then
> write.

I think that depends on whether it’s acceptable to have partial data in the file 
rather than complete, correct data. If writing partial data will cause the consumer 
of that text file to generate incorrect results or crash, then it’s better to make 
sure it’s all or nothing, typically using an atomic write mechanism: write to a 
temp file, and when that has completed successfully, move the temp file to the 
destination path.
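
A minimal sketch of that temp-file pattern in 4D (the paths and names are made 
up; note that MOVE DOCUMENT will not overwrite an existing file, so the old file 
is deleted first, which strictly speaking leaves a brief window with no file at 
the destination):

C_TEXT($path;$tempPath;$text)
$path:=Get 4D folder(Database folder)+"export.txt"
$tempPath:=$path+".tmp"

TEXT TO DOCUMENT($tempPath;$text)  // write everything to the temp file first
If (Test path name($path)=Is a document)
	DELETE DOCUMENT($path)
End if
MOVE DOCUMENT($tempPath;$path)  // publish the completed file under its final name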

If performance is important, concatenating text 200K times is almost definitely 
the wrong way, and the best way depends on what version of 4D you’re using, the 
total size of the text, how fast is fast enough, and possibly whether you’re 
writing to SSD or spinning hard drive.

Jim Crate


Re: gitignore for projects

2020-01-06 Thread John DeSoi via 4D_Tech


> On Jan 6, 2020, at 9:38 AM, Kirk Brooks via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
>  So inside of this folder I exclude the Preferences,
> userPreferences and Logs. Here's a recent one:


That's another 4D 18 upgrade issue. 4D 18 renames Preferences to Settings and 
rewrites various settings files. If you want to develop with 17 and 18 in the 
same database folder, create a symlink between Settings and Preferences.

John DeSoi, Ph.D.


Re: gitignore for projects

2020-01-06 Thread John DeSoi via 4D_Tech


> On Jan 6, 2020, at 8:48 AM, Mike Kerner via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
> @Tom - I get an error for 3d buttons, too "3D button style is not
> supported."  I was thinking of writing a quick tool to fix all the button
> types, since parsing and editing the JSON should be straightforward, but
> what I might do instead is farm this out to the people who are patiently
> waiting for the RFQ on this project

Right after the export from binary mode to project format is a good place to 
implement automated fixes for testing in 4D 18. For example, I wrote a script 
to rename all the relations in catalog.4DCatalog to make it more usable for 
ORDA in 4D 18.


> @John - method comments are gone - does that mean that they aren't even
> exported when the project is created?

Completely gone and not exported in project mode as far as I can tell.

John DeSoi, Ph.D.


Re: Reading and writing large arrays to disk

2020-01-06 Thread Kirk Brooks via 4D_Tech
Peter,
I agree with Chuck here - writing a line at a time is slow. It's very
secure though, so it's good if you might crash - whatever has already been
written stays written to disk. But otherwise it's better to buffer some and
then write.

After looking at the link Arnaud posted, it looks like concatenating about
1-2k worth of data is optimal.
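
A sketch of that kind of buffering (the 2 KB threshold and the variable names
are just illustrative; $path and arrText are assumed to exist):

C_TIME($doc)
C_TEXT($buffer)
C_LONGINT($i)

$doc:=Create document($path)
$buffer:=""
For ($i;1;Size of array(arrText))
	$buffer:=$buffer+arrText{$i}+Char(Carriage return)
	If (Length($buffer)>2048)  // flush roughly every 2 KB
		SEND PACKET($doc;$buffer)
		$buffer:=""
	End if
End for
If (Length($buffer)>0)
	SEND PACKET($doc;$buffer)  // flush the remainder
End if
CLOSE DOCUMENT($doc)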

-- 
Kirk Brooks
San Francisco, CA
===

What can be said, can be said clearly,
and what you can’t say, you should shut up about

*Wittgenstein and the Computer *

Re: gitignore for projects

2020-01-06 Thread Kirk Brooks via 4D_Tech
Mike,
To your original question about which files to ignore: I drop all the macOS
specific files (obviously).

The RESOURCES folder is placed outside the Project folder; Data, Components
and Plugins are too. So the folder I actually make the git repo in is the
database folder - the Project folder is really just the alternative to the
binary files. Inside the database folder I exclude the Preferences,
userPreferences and Logs folders. Here's a recent one:

# Untracked files and directories:
dataFile/
Logs/

Preferences/

userPreferences.*/

# OS generated files #
######################
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db



On Mon, Jan 6, 2020 at 6:49 AM Mike Kerner via 4D_Tech <4d_tech@lists.4d.com>
wrote:

>  method comments are gone - does that mean that they aren't even
> exported when the project is created?
>
It's that way now. I haven't heard anything concrete regarding what the
future plans are.
I was starting to get agitated about this, but when I thought about it, the
'method' is just a text file now. Where are these comments going to go? Either
they are added as a field to the JSON in the file header (not preferable to
me) or they are maintained in some sort of separate structure, like another
directory of text files. Nice - but more complexity.

So, what are Comments good for? Obviously comments in code are crucial but
these are a) more decorative and b) require a separate, specific action to
access. The most useful thing for me is that they are the help tips for
methods in components. You can't search on them, you can't easily update or
export them en masse. I don't think their value is worth the effort to
replicate in Project mode. I do think there is a need for improving the
documentation in a Project, but the old Comments aren't it.

Additionally, recent versions of v18 have a tendency to lose the block
comments. It seems like they simply aren't shown, not actually removed. I
reverted to an earlier version of v18 (244065) until they fix it.

https://forums.4d.com/Post/EN/33167019/1/33167020

Product name 4D - 4D Server
Build 246179
Platform MacOSX
Full Name Kirk BROOKS
Bug number ACI0100382 (In Progress)
Severity UI/Usability
Bitness 64-bit
Submitted 12/29/2019 - 16:32
CONTEXT:
Systems or/and 4D versions or/and hardware where the bug happens:
- 4D single user
-
Systems or/and 4D versions or/and hardware where the bug doesn't happen:
-
-

SUMMARY:
This build, and some recent previous ones, do not display block comments.
This applies to block comments created in a previous version, 244065 for
instance, as well as block comments entered in 246179.
This seems to be a display problem. The space the comments occupy is shown
but it's blank in 246179. The comments are still there, however: opening the same
database in 244065 and looking at the same method, I see them.

STEPS TO REPRODUCE THE BUG:
1. open a method
2. create a block comment and type something. Comments are visible until
the method is closed.
3. close and re-open the method

ACTUAL RESULT: comments are not visible

EXPECTED RESULT: comments are visible


-- 
Kirk Brooks
San Francisco, CA
===

What can be said, can be said clearly,
and what you can’t say, you should shut up about

*Wittgenstein and the Computer *

Re: gitignore for projects

2020-01-06 Thread Mike Kerner via 4D_Tech
oh gawd, so much has happened on this thread in a couple of days...
@Tom - I get an error for 3d buttons, too "3D button style is not
supported."  I was thinking of writing a quick tool to fix all the button
types, since parsing and editing the JSON should be straightforward, but
what I might do instead is farm this out to the people who are patiently
waiting for the RFQ on this project
@John - method comments are gone - does that mean that they aren't even
exported when the project is created?

All of the Git GUI tools have their thing, and there are definitely times
when the CLI is the way to go, but that is becoming less and less
frequent.  I would say we're probably doing 95% or so of our git work in
GitKraken, and 5% in the CLI.  GK has been developing really rapidly,
including their Glo functionality, which gives you Kanban-style card boards
for tracking issues and priorities in your project, and it syncs with
GitHub Issues if you want to do that.  We've been using GitKraken for a
couple of years (although on the iPads, for instance, we use Working Copy
because it's the only real git tool available on an iPad).  I personally don't
like GitHub Desktop.

Did I mention that I'm really happy that 4D has chosen to go this direction?

On Mon, Jan 6, 2020 at 8:55 AM John DeSoi via 4D_Tech <4d_tech@lists.4d.com>
wrote:

> Hi Jeremy,
>
> One I used to use (GitX, I think) would sometimes take forever or spin the
> beach ball of death for large commits. I'm now using Fork (which I really
> like). Sometimes I get various commit or other operation errors. No
> problems from the command line.
>
> Here is another issue to watch out for with 4D/Git. In 4D you can globally
> rename a method changing only the case and not the spelling. Even if 4D
> changes the method name on disk, git remembers the case from the first commit.
> It is not seen as a change because by default on Mac and Windows file names
> are not case sensitive. So if you switch branches, git checks out the file
> name using whatever case you used initially. This causes 4D to tokenize the
> name incorrectly in method callers.
>
> To solve this, you need to use git mv to tell git to use the updated name
> case when you rename methods without changing the spelling.
>
> John DeSoi, Ph.D.
>
>
> > On Jan 6, 2020, at 7:28 AM, Jeremy French  wrote:
> >
> > What type of issues have you found using a GIT GUI tool?
>



-- 
On the first day, God created the heavens and the Earth
On the second day, God created the oceans.
On the third day, God put the animals on hold for a few hours,
   and did a little diving.
And God said, "This is good."

Re: gitignore for projects

2020-01-06 Thread John DeSoi via 4D_Tech
Hi Jeremy,

One I used to use (GitX, I think) would sometimes take forever or spin the 
beach ball of death for large commits. I'm now using Fork (which I really 
like). Sometimes I get various commit or other operation errors. No problems 
from the command line.

Here is another issue to watch out for with 4D/Git. In 4D you can globally 
rename a method changing only the case and not the spelling. Even if 4D changes 
the method name on disk, git remembers the case from the first commit. It is not 
seen as a change because by default on Mac and Windows file names are not case 
sensitive. So if you switch branches, git checks out the file name using 
whatever case you used initially. This causes 4D to tokenize the name 
incorrectly in method callers.

To solve this, you need to use git mv to tell git to use the updated name case 
when you rename methods without changing the spelling.

John DeSoi, Ph.D.


> On Jan 6, 2020, at 7:28 AM, Jeremy French  wrote:
> 
> What type of issues have you found using a GIT GUI tool?


Re: gitignore for projects

2020-01-06 Thread Jeremy French via 4D_Tech
Hi John,

What type of issues have you found using a GIT GUI tool?

- Jeremy French


> On Jan 5, 2020, at 9:45 PM, John DeSoi via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
> Is this with git directly from the command line or using a GUI tool? I have 
> seen many performance and other issues using a GUI tool that go away when I 
> execute the same operations from the command line.


Re: gitignore for projects

2020-01-06 Thread Tom Benedict via 4D_Tech
On Jan 5, 2020, at 18:45, John DeSoi via 4D_Tech <4d_tech@lists.4d.com> wrote:
> 
> Hi Tom,
> 
>> Also, my attempt at setting up a git repository ran into similar performance 
>> issues, only hours instead of minutes. I lost interest at that point. 
> 
> Is this with git directly from the command line or using a GUI tool? I have 
> seen many performance and other issues using a GUI tool that go away when I 
> execute the same operations from the command line.
> 
Indeed, I’ve only used GitHub Desktop. I need to give the command line a try. 
The steps outlined in the 4D Blog posting will be a good place for me to start.

Thanks,

Tom Benedict

Re: Reading and writing large arrays to disk

2020-01-06 Thread Charles Miller via 4D_Tech
The overhead for writing 1 KB is the same as writing 1 MB to disk. Writing
one element at a time will take much longer.

Regards
Chuck

On Mon, Jan 6, 2020 at 4:18 AM Peter Bozek via 4D_Tech <4d_tech@lists.4d.com>
wrote:

> Mitch,
>
> If speed is the issue, you may try writing/reading array elements to disk
> using SEND PACKET / RECEIVE PACKET. If you want to speed up reading, insert
> the number of elements into the first line, like
> 
> $doc:=Create document($path)
>   // first line: element count, then one element per line (CR-delimited)
> SEND PACKET($doc;String(Size of array(arrayElement))+Char(Carriage return))
> For ($i;1;Size of array(arrayElement))
>   SEND PACKET($doc;arrayElement{$i}+Char(Carriage return))
> End for
> CLOSE DOCUMENT($doc)
> 
> and vice versa
> 
> $doc:=Open document($path)
>   // read the element count, size the array once, then fill it
> RECEIVE PACKET($doc;$t;Char(Carriage return))
> $arraySize:=Num($t)
> ARRAY TEXT(arrayElement;$arraySize)
> For ($i;1;$arraySize)
>   RECEIVE PACKET($doc;$t;Char(Carriage return))
>   arrayElement{$i}:=$t
> End for
> CLOSE DOCUMENT($doc)
> 
> The above should avoid the part where you (or 4D) keep resizing a big memory
> block to insert additional array elements.
>
> --
> Peter Bozek
>
>
> On Mon, Jan 6, 2020 at 5:09 AM Mitchell Shiller via 4D_Tech <
> 4d_tech@lists.4d.com> wrote:
>
> > Hi,
> >
> > I have large string arrays (about 200 k elements).
> > I need to write and read them to disk.
> > Speed is the most important criteria.
> >
> > Options
> > 1) Create a text variable (loop with CR delimiter) and then TEXT TO
> > DOCUMENT.
> > 2) VARIABLE TO BLOB, COMPRESS BLOB, BLOB TO DOCUMENT
> > 3) OB SET ARRAY, JSON STRINGIFY, TEXT TO DOCUMENT
> >
> > Obviously using the reverse commands to read from disk.
> > Option one seems by far the slowest.
> >
> > Anyone know if 2 or 3 is faster?
> > Does the COMPRESS BLOB help? The resultant file is about 2 MB in size
> > without compression.
> > If yes to COMPRESS BLOB, any difference between native 4D and GZIP
> > compression?
> >
> > Thanks as always.
> >
> > Mitch
> >
> >
> >
> > Sent from my iPad
>
>
>
> --
> --
>
> Peter Bozek

-- 
-
 Chuck Miller Voice: (617) 739-0306 Fax: (617) 232-1064
 Informed Solutions, Inc.
 Brookline, MA 02446 USA Registered 4D Developer
   Providers of 4D, Sybase & SQL Server connectivity
  https://www.informed-solutions.com
-
This message and any attached documents contain information which may be
confidential, subject to privilege or exempt from disclosure under
applicable law.  These materials are intended only for the use of the
intended recipient. If you are not the intended recipient of this
transmission, you are hereby notified that any distribution, disclosure,
printing, copying, storage, modification or the taking of any action in
reliance upon this transmission is strictly prohibited.  Delivery of this
message to any person other than the intended recipient shall not
compromise or waive such confidentiality, privilege or exemption from
disclosure as to this communication.

Re: Reading and writing large arrays to disk

2020-01-06 Thread Peter Bozek via 4D_Tech
Mitch,

If speed is the issue, you may try writing/reading array elements to disk
using SEND PACKET / RECEIVE PACKET. If you want to speed up reading, insert
the number of elements into the first line, like

$doc:=Create document($path)
  // first line: element count, then one element per line (CR-delimited)
SEND PACKET($doc;String(Size of array(arrayElement))+Char(Carriage return))
For ($i;1;Size of array(arrayElement))
  SEND PACKET($doc;arrayElement{$i}+Char(Carriage return))
End for
CLOSE DOCUMENT($doc)

and vice versa

$doc:=Open document($path)
  // read the element count, size the array once, then fill it
RECEIVE PACKET($doc;$t;Char(Carriage return))
$arraySize:=Num($t)
ARRAY TEXT(arrayElement;$arraySize)
For ($i;1;$arraySize)
  RECEIVE PACKET($doc;$t;Char(Carriage return))
  arrayElement{$i}:=$t
End for
CLOSE DOCUMENT($doc)

The above should avoid the part where you (or 4D) keep resizing a big memory
block to insert additional array elements.

--
Peter Bozek


On Mon, Jan 6, 2020 at 5:09 AM Mitchell Shiller via 4D_Tech <
4d_tech@lists.4d.com> wrote:

> Hi,
>
> I have large string arrays (about 200 k elements).
> I need to write and read them to disk.
> Speed is the most important criteria.
>
> Options
> 1) Create a text variable (loop with CR delimiter) and then TEXT TO
> DOCUMENT.
> 2) VARIABLE TO BLOB, COMPRESS BLOB, BLOB TO DOCUMENT
> 3) OB SET ARRAY, JSON STRINGIFY, TEXT TO DOCUMENT
>
> Obviously using the reverse commands to read from disk.
> Option one seems by far the slowest.
>
> Anyone know if 2 or 3 is faster?
> Does the COMPRESS BLOB help? The resultant file is about 2 MB in size
> without compression.
> If yes to COMPRESS BLOB, any difference between native 4D and GZIP
> compression?
>
> Thanks as always.
>
> Mitch
>
>
>
> Sent from my iPad



-- 
--

Peter Bozek

Re: v17 64-bit -- error opening datafile on MacOS file server

2020-01-06 Thread Koen Van Hooreweghe via 4D_Tech
Hi Allan,

I've seen a similar issue when opening a 4D database which resides on a mounted 
disk (macOS, both structure and data on the remote). This is a confirmed bug 
(ACI0099881) in 4D which started in v17R4 (and is still present in R5 and R6). 
Unfortunately this bug will not get fixed in v17, but I can confirm it is fixed 
in the v18 beta versions.

So either go back to 4D v17.3 (not an R-version) or to v17R3 (and miss a few 
new features), or move forward to the v18 beta.

Kind regards,
Koen

> Op 4 jan. 2020, om 04:25 heeft Allan Udy via 4D_Tech <4d_tech@lists.4d.com> 
> het volgende geschreven:
> 
> We use a couple of very small databases in-house, where the datafile is 
> stored on a file server (MacOS), and various people will periodically access 
> the system and then quit.
> 
> Just upgraded one of these to v17R6 64-bit (from v17R4 32-bit) ... at 
> startup a record is written into the datafile and this action immediately 
> throws an error: "No more room to save the record."
> 
> If we copy the datafile off the file server onto a local hard disk, the 4D 
> v17R6 64-bit version of the app opens the datafile perfectly fine, with no 
> error.
> 
> It appears that this is a 64-bit problem, as the earlier v17R4 32-bit version 
> works fine, whereas the R4 and R5 64-bit versions also display the same error.
> 
> I've tried with two different client machines (both Macs), and two different 
> file servers -- same result.
> 
> Anyone have any suggestions as to what may be going on here?  What am I 
> missing?




Compass bvba
Koen Van Hooreweghe
Kloosterstraat 65
9910 Aalter
Belgium
tel +32 495 511.653


Re: Reading and writing large arrays to disk

2020-01-06 Thread Arnaud init5 imap via 4D_Tech


> Le 6 janv. 2020 à 05:48, Kirk Brooks via 4D_Tech <4d_tech@lists.4d.com> a 
> écrit :
> 
> Miyako,
> Nice explanation.
> 
> More or less - where do you think the inflection point is where simple text
> concatenation becomes less efficient than adding the text to a BLOB?

Hi Kirk, 
something seems to happen around 1024K:

-- 
Arnaud de Montard 




Re: Reading and writing large arrays to disk

2020-01-06 Thread Arnaud init5 imap via 4D_Tech


> Le 6 janv. 2020 à 05:09, Mitchell Shiller via 4D_Tech <4d_tech@lists.4d.com> 
> a écrit :
> 
> Hi,
> 
> I have large string arrays (about 200 k elements).
> I need to write and read them to disk.
> Speed is the most important criteria.
> 
> Options
> 1) Create a text variable (loop with CR delimiter) and then TEXT TO DOCUMENT. 
> 2) VARIABLE TO BLOB, COMPRESS BLOB, BLOB TO DOCUMENT
> 3) OB SET ARRAY, JSON STRINGIFY, TEXT TO DOCUMENT 
> 
> Obviously using the reverse commands to read from disk.
> Option one seems by far the slowest.
> 
> Anyone know if 2 or 3 is faster?
> Does the COMPRESS BLOB help? The resultant file is about 2 MB in size without 
> compression.
> If yes to COMPRESS BLOB, any difference between native 4D and GZIP compression?

Hi Mitch, 
if you're in a version before v17, see this:

it's blob based. 

But since v17, using the 'join' member function seems to be far better (see the 
bottom of the same thread); schematically:
 array to collection($item_c;$array)
 $out_t:=$item_c.join("\r")

Or better, replace the array with a collection in the whole method:
 C_COLLECTION($item_c)
 $item_c:=New collection
 loop(...)  // pseudo: for each value to store
   $item_c.push($value)
 end loop
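
Filled out a bit (still just a sketch; arrText and $path are placeholder
names), the write side could look like:

 C_COLLECTION($item_c)
 C_TEXT($out_t;$path)
 C_LONGINT($i)

 $item_c:=New collection
 For ($i;1;Size of array(arrText))
   $item_c.push(arrText{$i})
 End for
 $out_t:=$item_c.join(Char(Carriage return))
 TEXT TO DOCUMENT($path;$out_t)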

-- 
Arnaud de Montard 



