You may want to check the ulimit for the user. Shell out to Unix and run 
"ulimit -f"; this will tell you whether a maximum file size is set for this 
specific user.
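
A quick way to check, assuming a bash or ksh login shell (the unit for -f
is blocks, 512- or 1024-byte depending on the shell):

```shell
# Show the maximum size of files this shell (and its children) may create.
# "unlimited" means no cap is in force.
ulimit -f

# Show all resource limits at once for comparison.
ulimit -a
```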

Dan Goble
Sr. Programmer Analyst
RATEX Business Solutions, Inc.

-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Symeon Breen
Sent: Monday, July 06, 2009 6:01 AM
To: 'U2 Users List'
Subject: Re: [U2] 2gig limit sticking into bash shell

Thanks for the pointers,

Sorry, I misled you a bit: when I tried this myself it does indeed work if
you come out of udt and run it; it only fails if you shell out from within
udt.
The set output before and while in udt shows quite a few differences, though
these mostly look to be udt config params:
diff set.out set2.out
2c2,9
< BASH=/bin/bash
---
> AIMG_BUFSZ=102400
> AIMG_FLUSH_BLKS=2
> AIMG_MIN_BLKS=10
> ARCHIVE_TO_TAPE=0
> ARCH_FLAG=0
> ARCH_WRITE_SZ=0
> AVG_TUPLE_LEN=4
> BASH=/bin/sh
8a16,20
> BGINPUTTIMEOUT=0
> BIMG_BUFSZ=102400
> BIMG_FLUSH_BLKS=2
> BIMG_MIN_BLKS=10
> BPF_NFILES=80
10c22,24
< COLORS=/etc/DIR_COLORS.xterm
---
> CENTURY_PIVOT=1930
> CHECK_HOLD_EXIST=0
> CHKPNT_TIME=300
11a26,27
> COMPACTOR_POLICY=1
> CONVERT_EURO=0
12a29
> EFS_LCKTIME=0
14a32,34
> EXPBLKSIZE=16
> FCNTL_ON=0
> GLM_MEM_SEGSZ=4194304
15a36,37
> GRPCMT_TIME=5
> GRP_FREE_BLK=5
23c45,46
< IFS=$' \t\n'
---
> IFS='
> '
24a48,49
> JRNL_MAX_FILES=400
> JRNL_MAX_PROCS=1
25a51,52
> KEYDATA_MERGE_LOAD=40
> KEYDATA_SPLIT_LOAD=95
26a54,55
> LB_FLAG=1
> LCT_NUM=8
29a59
> LOCKFIFO=0
34a65,102
> MAX_CAPT_LEVEL=2
> MAX_DSFILES=1000
> MAX_FLENGTH=1073741824
> MAX_LRF_FILESIZE=134217728
> MAX_NEXT_HOLD_DIGITS=4
> MAX_OBJ_SIZE=307200
> MAX_OPEN_FILE=500
> MAX_OPEN_OSF=100
> MAX_OPEN_SEQF=150
> MAX_REP_DISTRIB=1
> MAX_REP_SHMSZ=33554432
> MAX_RETN_LEVEL=2
> MERGE_LOAD=40
> MGLM_BUCKET_SIZE=50
> MIN_MEMORY_TEMP=64
> NFA_CONVERT_CHAR=0
> NFILES=1019
> NSEM_PSET=8
> NULL_FLAG=0
> NUSERS=40
> N_AFT=200
> N_AFT_BUCKET=101
> N_AFT_MLF_BUCKET=23
> N_AFT_SECTION=1
> N_AIMG=2
> N_ARCH=2
> N_BIG=233
> N_BIMG=2
> N_FILESYS=200
> N_GLM_GLOBAL_BUCKET=101
> N_GLM_SELF_BUCKET=23
> N_PARTFILE=500
> N_PGQ=10
> N_PUT=8192
> N_REP_OPEN_FILE=8
> N_SYNC=0
> N_TMAFT_BUCKET=19
> N_TMQ=10
39a108
> PART_TBL=/usr/ud71/parttbl
41,44c110,113
< PIPESTATUS=([0]="0")
< PPID=7616
< PROMPT_COMMAND='echo -ne
"\033]0;${us...@${hostname%%.*}:${PWD/#$HOME/~}\007"'
< PS1='[...@\h \W]\$ '
---
> PIPESTATUS=([0]="0" [1]="2")
> POSIXLY_CORRECT=y
> PPID=10236
> PS1='\s-\v\$ '
47c116,122
< PWD=/home/symeon
---
> PWD=/usr/ud/accounts/cust855
> REP_FLAG=0
> REP_LOG_PATH=/usr/ud71/replog
> SBCS_SHM_SIZE=1048576
> SB_FLAG=0
> SETINDEX_BUFFER_KEYS=0
> SETINDEX_VALIDATE_KEY=0
49,50c124,140
<
SHELLOPTS=braceexpand:emacs:hashall:histexpand:history:interactive-comments:
monitor
< SHLVL=1
---
>
SHELLOPTS=braceexpand:emacs:hashall:histexpand:history:interactive-comments:
monitor:posix
> SHLVL=3
> SHM_ATT_ADD=0
> SHM_FIL_CNT=2048
> SHM_FREEPCT=25
> SHM_GNPAGES=32
> SHM_GNTBLS=40
> SHM_GPAGESZ=131072
> SHM_LBA=4096
> SHM_LCINENTS=100
> SHM_LMINENTS=32
> SHM_LPAGESZ=4096
> SHM_LPINENTS=10
> SHM_MAX_SIZE=33554432
> SHM_MIN_NATT=4
> SHM_NFREES=1
> SPLIT_LOAD=60
52,55c142,151
< SSH_CLIENT='::ffff:87.115.26.230 3539 22'
< SSH_CONNECTION='::ffff:87.115.26.230 3539 ::ffff:78.109.174.173 22'
< SSH_TTY=/dev/pts/6
<
SUPPORTED=zh_CN.UTF-8:zh_CN:zh:fr_FR.UTF-8:fr_FR:fr:de_DE.UTF-8:de_DE:de:ja_
JP.UTF-8:ja_JP:ja:es_ES.UTF-8:es_ES:es:en_US.UTF-8:en_US:en
---
> SSH_CLIENT='::ffff:87.115.26.230 4408 22'
> SSH_CONNECTION='::ffff:87.115.26.230 4408 ::ffff:78.109.174.173 22'
> SSH_TTY=/dev/pts/7
> STATIC_GROWTH_WARN_INTERVAL=300
> STATIC_GROWTH_WARN_SIZE=1610612736
> STATIC_GROWTH_WARN_TABLE_SIZE=256
> SYNC_TIME=0
> SYSTEM_EURO=164
> SYS_PV=3
> TCA_SIZE=128
56a153,157
> TERM_EURO=164
> TMP=/tmp/
> TOGGLE_NAP_TIME=31
> TSTIMEOUT=60
> UDR_CONVERT_CHAR=1
58a160
> UDT_LANGGRP=255/192/129
59a162
> UPL_LOGGING=0
61,69c164,167
< _=AD
< psc ()
< {
<     ps --cols=1000 --sort='-%cpu,uid,pgid,ppid,pid' -e -o
user,pid,ppid,pgid,stime,stat,wchan,time,pcpu,pmem,vsz,rss,sz,args | sed
's/^/ /' | less
< }
< psm ()
< {
<     ps --cols=1000 --sort='-vsz,uid,pgid,ppid,pid' -e -o
user,pid,ppid,pgid,stime,stat,wchan,time,pcpu,pmem,vsz,rss,sz,args | sed
's/^/ /' | less
< }
---
> VARMEM_PCT=50
> WRITE_TO_CONSOLE=0
> ZERO_CHAR=131
> _=

LD_LIBRARY_PATH and PATH are the same all the time.
Running unzip with the full path gives the same result, and whereis unzip
reports only the one path, /usr/bin/unzip.

Also, the at command works!

I can work with the at command: have it write a completed marker at the
end and poll for this from the udt process. A strange one, though.
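
A minimal sketch of that workaround, assuming atd is available; the paths
and marker file name here are illustrative, not from the thread:

```shell
# The job string: run the unzip, then drop a marker file on success.
# /data/big.zip and /data/extract are illustrative paths.
JOB='unzip -o /data/big.zip -d /data/extract && touch /data/extract/UNZIP.DONE'

# Queue it with at(1), which runs it outside the udt process tree:
#   printf '%s\n' "$JOB" | at now

# The udt side then polls for the marker before reading the CSV, e.g.:
#   [ -f /data/extract/UNZIP.DONE ] && echo ready
echo "queued command: $JOB"
```

Because at(1) hands the job to atd rather than to a child of udt, the job
runs with atd's environment and limits, which matches the observation that
"the at command works".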

Symeon.

-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of
[email protected]
Sent: 03 July 2009 16:57
To: [email protected]
Subject: Re: [U2] 2gig limit sticking into bash shell


 That is an interesting one! So the file system, kernel, version of unzip,
etc. are all OK for the > 2 gig "32-bit" limit, but once you go into 32-bit
udt and back out (or from inside) you have the issue. I am wondering if
your library path has changed, or the PATH var itself. Does anything look
different in the environment after launching udt and exiting? Try saving
the environment and checking after, something like:

$ set >/tmp/envbefore
$ udt    (then exit out)
$ set >/tmp/envafter
$ diff /tmp/envbefore /tmp/envafter

(Particularly interested in LD_LIBRARY_PATH)

If you run unzip with the full path name, does it work? Maybe a different
version of unzip with the 2 GB limit is being used.

If you launch a new shell, then run udt, exit udt, and then exit that shell
so you are back in the original, does the unzip work?

If you run it as a batch job with "at", does it work OK? Maybe submitting
the job and monitoring from UniData to see when the unzip is finished could
work.

I know Debian Linux solved the 32/64-bit issue by installing parallel
libraries, so 32- and 64-bit libs and apps happily co-exist. I don't know
how Red Hat handles it, as I have stayed with the other flavors out of lazy
loyalty (usually running UniData under Slackware Linux, actually).

I am interested in this because of my SQLizer -- an application that takes
UniData/UniVerse tables and keeps them synchronized with mirrors in MySQL,
SQL Server or Oracle tables -- which dumps the contents into text files for
bulk loading on the other side. During rebuilds the entire U2 table
contents are dumped, and so far clients have not hit a 2 GB issue, but I am
thinking about what to do in the event that happens. I am considering a
multiple-file approach, splitting the output into 2 GB chunks.

If you wanted to, it might be possible to unzip and pipe the output to a
short perl script that does the splitting for you.
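
For example, split(1) on a pipe would avoid a scratch perl script entirely.
A sketch of the mechanics; the archive and member names are illustrative,
and a small stand-in stream with 2-byte pieces is used here in place of the
real multi-gigabyte data:

```shell
# In production the stream would come from unzip, cut into pieces that
# each stay under the 2 GB mark, e.g.:
#   unzip -p big.zip data.csv | split -b 1900m - data.csv.part_
# The same pipeline, demonstrated on a tiny stand-in stream:
workdir=$(mktemp -d)
printf 'abcdef' | (cd "$workdir" && split -b 2 - part_)
ls "$workdir"           # part_aa part_ab part_ac
cat "$workdir"/part_*   # prints abcdef: the glob order reassembles the stream
```

split's default alphabetical suffixes mean a plain shell glob concatenates
the pieces back in the original order.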

Good luck!

Steve...
--
Steve Kneizys
ERPData, LLC

-----Original Message-----
From: Symeon Breen <[email protected]>
To: 'U2 Users List' <[email protected]>
Sent: Fri, Jul 3, 2009 7:07 am
Subject: [U2] 2gig limit sticking into bash shell

Hi. Red Hat Linux ES 3 64-bit with udt 7.1 32-bit: I have a zip file that
contains a 12 GB CSV. I can unzip it fine from bash. However, if I go into
udt and then either shell out, or quit from udt, and try the unzip command,
it stops when the extract gets to 2 GB. I understand the fork-exec
mechanism of *nix, so I wondered: is there a particular environment
variable or something else I can change in the new shell so that my unzip
command will work?

Thanks.

Symeon.

_______________________________________________
U2-Users mailing list
[email protected]
http://listserver.u2ug.org/mailman/listinfo/u2-users