Re: [U2] Huge Dynamic Unidata file

2012-04-26 Thread Symeon Breen
Out of interest, how big is this file and how many records?



Re: [U2] Huge Dynamic Unidata file

2012-04-26 Thread Baakkonen, Rodney A (Rod) 46K
 Indexes and all, 91 GB with about 75 million records.


Re: [U2] Huge Dynamic Unidata file

2012-04-26 Thread Symeon Breen
That is pretty big. My personal experience with big files on UDT was up to
about 60 GB; we used memresize with no problems, but had to set TMPPATH to
another drive.




Re: [U2] Huge Dynamic Unidata file

2012-04-26 Thread Symeon Breen
You are of course right - in my defence it was a few years ago ;)



-Original Message-
From: u2-users-boun...@listserver.u2ug.org 
[mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Baakkonen, Rodney A 
(Rod) 46K
Sent: 26 April 2012 12:21
To: 'U2 Users List'
Subject: Re: [U2] Huge Dynamic Unidata file

 I thought TMPPATH was not valid for Dynamic files:

"The TMPPATH option is invalid if any DYNAMIC options are specified (or if
the starting file is dynamic and no file type options are specified)."


[U2] Huge Dynamic Unidata file

2012-04-25 Thread Jonathan Leckie
I have a very large file that I don't have enough free space to memresize.
How about I create a new dynamic (temporary) file, copy all the records (in
ECL) to the new file, and then (at the Unix level) copy the temporary file
over the top of the original?

Does that seem like a sensible idea?
 
 
Regards
Jonathan Leckie
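A minimal sketch of that sequence in ECL, using placeholder names (BIG.FILE
for the existing file, BIG.FILE.TMP for the temporary copy); the final swap
happens at the Unix level, outside ECL:

   CREATE.FILE BIG.FILE.TMP DYNAMIC
   SELECT BIG.FILE
   COPY FROM BIG.FILE TO BIG.FILE.TMP

Once the copy finishes and everyone is out of the file, the BIG.FILE directory
is moved aside and BIG.FILE.TMP is renamed into its place at the Unix level.
As the replies below note, check the Unix permissions afterwards and have
users log out and back in before reopening the file.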
 





Re: [U2] Huge Dynamic Unidata file

2012-04-25 Thread Dave Henderson
Hi,

That will work; I have done it many times.
Just make sure you check the permissions at the Unix level after the copy.

Dave



Re: [U2] Huge Dynamic Unidata file

2012-04-25 Thread John Jenkins
 We've recently added a new UniData tuneable to udtconfig, UDT_SPLIT_POLICY,
which can help conserve space when an overflowed dynamic file splits. The total
size of the contents is not necessarily the same as the physical file size.
It is always worth checking the guide and the latest fixes-and-changes notes for
this change if you have a stake here, as the new split policy needs to be
positively chosen.

Regards

JayJay



Re: [U2] Huge Dynamic Unidata file

2012-04-25 Thread Israel, John R.
Also be sure that anyone who could have access to that file logs out and back
in before re-accessing the new file.

John



Re: [U2] Huge Dynamic Unidata file

2012-04-25 Thread Baakkonen, Rodney A (Rod) 46K
 I resize most of my Dynamic files this way. I don't like having a small
overxxx segment for every datxxx segment that memresize creates. By creating
the new file myself, I don't end up with a lot of these small overxxx segments
that are never used.

I also wrote a process to SELECT the old file and create a SAVELIST. Without
going into too much detail, the process uses PHANTOM to spawn off a number of
Unidata copies, so there are a number of simultaneous processes working to
build the new file. Each PHANTOM copy knows how many phantoms in total are
working on the file and where it sits in that sequence. Each phantom handles
part of the SAVELIST and repeats the iteration below until the list is
exhausted:

1. QSELECT SAVEDLISTS listname000 (process 2 would use listname001, process 3
would use listname002, etc.)
2. COPY FROM old.file TO new.file
3. Increment the list counter from 000 by the number of Phantoms and go back to 
step one to process a new segment of the savelist.
4. If a process cannot find the next savelist segment in SAVEDLISTS, it is done.

This process is almost as fast as memresize. You can control what file systems 
are used for space reasons and you don't get scads of overxxx segments in your 
FILE.NAME directory (one of my dynamic files has 39 dat segments and just an
over001). 

So I don't use memresize for dynamic files anymore. 
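For illustration, here is a minimal UniBasic sketch of one such worker's loop,
under stated assumptions: the listname prefix, OLD.FILE, NEW.FILE, and the way
each phantom learns MY.SEQ and NUM.PHANTOMS are all placeholders, not Rod's
actual code.

      * One PHANTOM worker: process its share of the numbered savelists.
      MY.SEQ       = 1                 ;* this worker's position (1..NUM.PHANTOMS)
      NUM.PHANTOMS = 4                 ;* total number of parallel workers
      OPEN 'SAVEDLISTS' TO SL.FILE ELSE STOP
      SEG  = MY.SEQ - 1                ;* worker 1 starts at listname000, worker 2 at 001, ...
      DONE = 0
      LOOP UNTIL DONE DO
         LIST.ID = 'listname' : FMT(SEG, 'R%3')
         READ DUMMY FROM SL.FILE, LIST.ID THEN
            EXECUTE 'QSELECT SAVEDLISTS ' : LIST.ID    ;* step 1: activate this segment
            EXECUTE 'COPY FROM OLD.FILE TO NEW.FILE'   ;* step 2: copy the selected records
            SEG = SEG + NUM.PHANTOMS                   ;* step 3: jump to this worker's next segment
         END ELSE
            DONE = 1                                   ;* step 4: segment not found, so finished
         END
      REPEAT

Each worker would be started with the ECL PHANTOM command; how MY.SEQ and
NUM.PHANTOMS reach each phantom (command line, a control record, named common)
is left open here.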



Re: [U2] Huge Dynamic Unidata file

2012-04-25 Thread Dean.Armbruster
Were you using the OVERFLOW option with memresize?  If not, memresize
should not be creating the extra over files.  If memresize did create
the extra over files without the OVERFLOW option, then that would be a
bug in memresize. 

Dean Armbruster
System Analyst
757-989-2839



Re: [U2] Huge Dynamic Unidata file

2012-04-25 Thread Wols Lists
Dunno Unidata, but if you're copying to a dynamic file, would it make
sense to use MINIMUM_MODULUS on the new file? JayJay, you'd know far
more than me about this, but in UniVerse, it makes sense to use it if
your file is not going to shrink and you're creating it specifically to
populate it with a large amount of pre-existing data.

Cheers,
Wol
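For illustration only, a one-line sketch of the idea on the UniData side, with
a placeholder file name and minimum modulus; the exact keyword spelling and
which parameters apply to dynamic files should be checked against your
release's CREATE.FILE documentation:

   CREATE.FILE NEW.FILE DYNAMIC MINIMUM.MODULUS 750007

Pre-sizing the target this way means a bulk copy spends its time writing data
rather than repeatedly splitting groups as the file grows.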



Re: [U2] Huge Dynamic Unidata file

2012-04-25 Thread Baakkonen, Rodney A (Rod) 46K
 
I think at the time I wrote this (Unidata 6, I think) I could not even get
memresize to handle overflow larger than 2 gigabytes, so I assumed Unidata's
solution was to create an overflow file for every dat segment. I have never
specified the OVERFLOW option when using memresize. Plus, I rarely have enough
space in a file system to hold two copies of one of my big files. One
workaround for the file-system space issue is to symbolically link the original
file into another file system, but that is a lot of work when you have a lot of
dat files. 




Re: [U2] Huge Dynamic Unidata file

2012-04-25 Thread John Jenkins
Yes, absolutely - I'm a great believer in a minimum modulo. If I have to copy
large files from one to another, my preferred method is to drive a load of
PHANTOMs with save-lists in parallel. It is also useful to drive from a SELECT
rather than an SSELECT, and to chunk the input list, so that each PHANTOM gets
a file-sequential, ordered set of input data to process. Large disk caches can
futz this idea a little, but it still holds true in general terms.

I think someone else mentioned this as well recently.
 Regards

JayJay
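A rough UniBasic sketch of that chunking step, under assumptions (BIG.FILE, the
listname prefix, and CHUNK.SIZE are placeholders, and starting the PHANTOM
workers themselves is not shown):

      * Split a file-sequential SELECT into numbered savelist segments.
      CHUNK.SIZE = 50000                       ;* records per segment (placeholder)
      OPEN 'SAVEDLISTS' TO SL.FILE ELSE STOP
      EXECUTE 'SELECT BIG.FILE'                ;* SELECT, not SSELECT: file-sequential order
      SEG = 0 ; CNT = 0 ; IDS = ''
      DONE = 0
      LOOP UNTIL DONE DO
         READNEXT ID THEN
            IF CNT = 0 THEN IDS = ID ELSE IDS = IDS : @FM : ID
            CNT = CNT + 1
         END ELSE
            DONE = 1
         END
         IF CNT = CHUNK.SIZE OR (DONE AND CNT > 0) THEN
            WRITE IDS ON SL.FILE, 'listname' : FMT(SEG, 'R%3')   ;* @FMs become newlines in a DIR-type file
            SEG = SEG + 1 ; CNT = 0 ; IDS = ''
         END
      REPEAT

Each PHANTOM worker then takes every Nth segment starting from its own sequence
number, as Rod describes earlier in the thread.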





Re: [U2] Huge Dynamic Unidata file

2012-04-25 Thread John Jenkins
We are looking at options for larger files in UniData as there are some BIG 
data sets out there. The options that are being looked at include (and in no 
particular order):

1. 64-bit files - over 2 GB
2. Distributed files (like UniVerse)
3. More dat/over/idx files - extending the datnnn numbering to allow more segments

Each has pros and cons, and if there are other options let's hear them!
If you have a need and a preference then please speak up. 

Regards

JayJay
