Re: Segfault in Bash

2020-07-15 Thread Chet Ramey
On 7/14/20 12:02 PM, Ilkka Virta wrote:
> On 14.7. 16:08, Chet Ramey wrote:
>> On 7/14/20 6:32 AM, Jeffrey Walton wrote:
>>> ./audit-libs.sh: line 17: 22929 Segmentation fault  (core dumped)
>>> $(echo "$file" | grep -E "*.so$")
>>
>> Bash is reporting that a process exited due to a seg fault, but it is
>> not necessarily a bash process.
> 
> As a suggestion: it might be useful if the error message showed the actual
> command that ran, after expansions. Here it shows the same command each
> time, and if only one of them crashed, you wouldn't immediately know which
> one it was. The un-expanded source line is in any case available in the
> script itself.

I understand the reasoning, but it's better for the command reported as
dumping core to be easy to resolve back to what was present in the script.
That makes the error easier to track down.

If you want the expanded command, enable xtrace for the portion of the
script of interest while you're debugging the problem.
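For example (a minimal sketch; the path below is made up for illustration), wrapping the suspect region in set -x / set +x prints each command, after expansion, to stderr as it runs:

```shell
#!/bin/bash
# Toggle xtrace around the region being debugged.  Each command is
# echoed to stderr, after all expansions, just before it executes.
file=/usr/lib/libexample.so   # hypothetical filename for illustration
set -x
echo "$file" | grep -E '\.so$'
set +x
```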


-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: Segfault in Bash

2020-07-14 Thread Ilkka Virta

On 14.7. 16:08, Chet Ramey wrote:

On 7/14/20 6:32 AM, Jeffrey Walton wrote:

./audit-libs.sh: line 17: 22929 Segmentation fault  (core dumped)
$(echo "$file" | grep -E "*.so$")


Bash is reporting that a process exited due to a seg fault, but it is
not necessarily a bash process.


As a suggestion: it might be useful if the error message showed the 
actual command that ran, after expansions. Here it shows the same 
command each time, and if only one of them crashed, you wouldn't 
immediately know which one it was. The un-expanded source line is in any 
case available in the script itself.


The message also seems to be much briefer for an interactive shell or a 
-c script. At least the latter ones might also benefit from the longer 
error message.


--
Ilkka Virta / itvi...@iki.fi



Re: Segfault in Bash

2020-07-14 Thread Ilkka Virta

On 14.7. 13:32, Jeffrey Walton wrote:

Hi Everyone,

I'm working on a script to find all shared objects in a directory. A
filename should match the RE '*.so$'. I thought I would pipe it to
grep:



IFS="" find "$dir" -name '*.so' -print | while read -r file
do
 if ! $(echo "$file" | grep -E "*.so$"); then continue; fi
 echo "library: $file"

done


Are you trying to find the .so files, or run them for some tests?
Because it looks to me like you're running whatever that command
substitution outputs, and not all dynamic libraries are made for that.



--
Ilkka Virta / itvi...@iki.fi



Re: Segfault in Bash

2020-07-14 Thread Chet Ramey
On 7/14/20 6:32 AM, Jeffrey Walton wrote:
> Hi Everyone,
> 
> I'm working on a script to find all shared objects in a directory. A
> filename should match the RE '*.so$'. I thought I would pipe it to
> grep:
> 
> $ ./audit-libs.sh /home/jwalton/tmp/ok2delete/lib
> ./audit-libs.sh: line 17: 22929 Segmentation fault  (core dumped)
> $(echo "$file" | grep -E "*.so$")
> ./audit-libs.sh: line 17: 22934 Segmentation fault  (core dumped)
> $(echo "$file" | grep -E "*.so$")
> ./audit-libs.sh: line 17: 22939 Segmentation fault  (core dumped)
> $(echo "$file" | grep -E "*.so$")
> ...
> 
> My code is broken at the moment. I know I am the cause of Bash's
> crash. But I feel like Bash should not segfault.

Bash is reporting that a process exited due to a seg fault, but it is
not necessarily a bash process.

Since the message is reporting a core dump, a backtrace from that would
tell you what's faulting.


-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: Segfault in Bash

2020-07-14 Thread Greg Wooledge
> > IFS="" find "$dir" -name '*.so' -print | while read -r file
> > do
> > if ! $(echo "$file" | grep -E "*.so$"); then continue; fi
> > echo "library: $file"
> > 
> > done

Also, I forgot to point out: your "if" line is executing each of
the shared libraries that you find.  Every one of them matches the
grep check, and since you enclosed the check in a command substitution,
the output of grep (which is the pathname) is *executed* as a command.

That's probably where your segfault is happening.

Once you remove this completely unnecessary and incorrectly written
check, the segfaults from running random shared library files as
commands will stop happening.
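A minimal corrected loop (a sketch) keeps the pipeline but drops the command substitution entirely, so nothing gets executed:

```shell
#!/bin/bash
dir=${1:-.}
# find already guarantees the *.so suffix, so no grep re-check is
# needed, and nothing from the pipeline is run as a command.
find "$dir" -name '*.so' -print | while read -r file
do
    echo "library: $file"
done
```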



Re: Segfault in Bash

2020-07-14 Thread Greg Wooledge
On Tue, Jul 14, 2020 at 06:32:44AM -0400, Jeffrey Walton wrote:
> $ ./audit-libs.sh /home/jwalton/tmp/ok2delete/lib
> ./audit-libs.sh: line 17: 22929 Segmentation fault  (core dumped)
> $(echo "$file" | grep -E "*.so$")

This grep regular expression is not valid.  The * symbol in a regular
expression means "0 or more of the previous thing", so you can never
begin a regular expression with a * character.

Also, the . character in a regex means "any one character", not a
literal dot.

If for some reason you actually wanted to grep for lines that end
with the string ".so", it would be:

grep '\.so$'

However, the *use* of grep here is also incorrect.

> My code is broken at the moment. I know I am the cause of Bash's
> crash. But I feel like Bash should not segfault.
> 
> IFS="" find "$dir" -name '*.so' -print | while read -r file
> do
> if ! $(echo "$file" | grep -E "*.so$"); then continue; fi
> echo "library: $file"
> 
> done

I don't even understand what the grep is supposed to be *doing* here.
You already know that each file processed by that check ends with .so
because of the find command that you used.  You could simply remove
the "if" line altogether.

IFS isn't actually doing anything, either.  It only applies to the
find command, not the other half of the pipeline.  And of course, find
(an external command) will just ignore it.

What you really want is simply:

find "$dir" -type f -name '*.so' -exec printf 'library: %s\n' {} +

If your code is "just an example" (bashphorism 9), and you actually
wanted to do MORE than print each filename, and thus you really did
want a bash loop to process each file within the script, then you
need to change a few things.  Your existing loop will blow up on any
filenames containing newlines.  Also, because you used a pipeline,
the loop runs in a subshell, so you can't set any variables and have
them survive -- this may or may not be a problem, and we can't know
because we don't know what the actual goal of the script is.

A proper while loop to process the pathnames emitted by find would look
something like:

while IFS= read -r -d '' f; do
  : secret stuff here
done < <(find "$dir" -type f -name '*.so' -print0)



Re: segfault w/bash-4.4-beta2 and assigning empty $*

2016-08-11 Thread Mike Frysinger
On 11 Aug 2016 08:32, Chet Ramey wrote:
> On 8/11/16 8:29 AM, Mike Frysinger wrote:
> > simple code to reproduce:
> > bash -c 'v=$*'
> 
> http://lists.gnu.org/archive/html/bug-bash/2016-07/msg00066.html

thanks ... still catching up after vacation and hadn't made it that far yet ;)
-mike




Re: segfault w/bash-4.4-beta2 and assigning empty $*

2016-08-11 Thread Chet Ramey
On 8/11/16 8:29 AM, Mike Frysinger wrote:
> simple code to reproduce:
>   bash -c 'v=$*'

http://lists.gnu.org/archive/html/bug-bash/2016-07/msg00066.html

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/





Re: SEGFAULT if bash script make source for itself

2014-09-05 Thread bogun.dmitriy
2014-09-03 7:31 GMT-07:00 Chet Ramey chet.ra...@case.edu:

 On 8/28/14, 2:02 PM, bogun.dmit...@gmail.com wrote:
  IMHO any user action should not lead to SIGSEGV! I am not objecting
 against
  recursive sourse itself. But when I got SIGSEGV from bash, I have no
  idea why this is happened. I have made recursive sourse by mistake and
  spend a lot of time looking up what exactly lead to SIGSEGV.
 
  Put a configurable limit on the deep of recursive source. There is almost
  no variant for legal usage of recursive source on deep... 1 for
  example. If someone need such recursion deep, he alway can raise limit or
  turn it off by setting it to 0.

 This is more or less the way I am leaning.  In the next version of bash, it
 will be possible to set a limit on the number of recursive source/. or eval
 calls at compile time.  This will be accomplished by changing a define in
 config-top.h.  There will be no limit enabled by default.

 Why a define? Why not a variable like FUNCNEST for functions? Most of
this mailing list tells me that any limits are inadmissible in GNU software...
And a SIGSEGV in the interpreter is only an end-user problem. And here the
limit is at compile time. :)


 Chet
 --
 ``The lyf so short, the craft so long to lerne.'' - Chaucer
  ``Ars longa, vita brevis'' - Hippocrates
 Chet Ramey, ITS, CWRU    c...@case.edu
 http://cnswww.cns.cwru.edu/~chet/



Re: SEGFAULT if bash script make source for itself

2014-09-05 Thread Chet Ramey
On 9/5/14, 2:57 AM, bogun.dmit...@gmail.com wrote:

 This is more or less the way I am leaning.  In the next version of bash, it
 will be possible to set a limit on the number of recursive source/. or eval
 calls at compile time.  This will be accomplished by changing a define in
 config-top.h.  There will be no limit enabled by default.
 
 Why a define? Why not a variable like FUNCNEST for functions? Most of
 this mailing list tells me that any limits are inadmissible in GNU software...
 And a SIGSEGV in the interpreter is only an end-user problem. And here the
 limit is at compile time. :)

I don't particularly like the proliferation of variables for this purpose.
The presence of a define that a user can enable at compile time (which is
not defined by default) is consistent with the philosophy that GNU
software should have no builtin limits.  That philosophy does not say that
a user cannot impose such a limit on himself.

Any programming language can be made to produce errors such as the SIGSEGV
you saw if a programmer is willing to put in the effort.  Systems have
limits.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: SEGFAULT if bash script make source for itself

2014-09-03 Thread Chet Ramey
On 8/28/14, 2:02 PM, bogun.dmit...@gmail.com wrote:
 IMHO no user action should lead to SIGSEGV! I am not objecting against
 recursive source itself. But when I got SIGSEGV from bash, I had no
 idea why it happened. I made the recursive source by mistake and
 spent a lot of time looking for what exactly led to the SIGSEGV.

 Put a configurable limit on the depth of recursive source. There is almost
 no legitimate use for recursive source at depth... 1, for
 example. If someone needs such recursion depth, they can always raise the
 limit or turn it off by setting it to 0.

This is more or less the way I am leaning.  In the next version of bash, it
will be possible to set a limit on the number of recursive source/. or eval
calls at compile time.  This will be accomplished by changing a define in
config-top.h.  There will be no limit enabled by default.
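For reference, in the released bash sources these knobs took the form of commented-out defines in config-top.h. The fragment below is a sketch from memory, not a verbatim copy; the macro names (SOURCENEST_MAX, EVALNEST_MAX) and values should be checked against the actual header:

```c
/* config-top.h (sketch).  Uncomment and rebuild to enable a limit;
   left commented out -- i.e. unlimited -- by default. */
/* #define SOURCENEST_MAX 500 */    /* max depth of recursive source/. */
/* #define EVALNEST_MAX 4096 */     /* max depth of recursive eval */
```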

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Chris Down
I really don't understand -- why is this unexpected? It's exactly what
I'd expect to happen if you try to do something like that. Sourcing
yourself should not be disallowed; that would prevent people from doing
things when *sensibly* sourcing their own script.




Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Steve Simmons
On Aug 28, 2014, at 12:37 PM, Chris Down ch...@chrisdown.name wrote:

 I really don't understand -- why is this unexpected? It's exactly what I'd 
 expect to happen if you try to do something like that. It should not be 
 disallowed to source yourself, that prevents people from doing things when 
 *sensibly* sourcing their own script.

Agree. It's perfectly valid for a sourced script to check for an error 
condition, fix the error, then re-source itself. Or for scripts to re-source 
each other, each time with new parameters. Recursion is a win.



Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Eric Blake
On 08/27/2014 07:07 PM, bogun.dmit...@gmail.com wrote:
 
 Expected result:
 Block source for files already listed in ${BASH_SOURCE}. Perhaps this
 behavior and changed behavior should be switched by option in set
 command.

No.  Recursive sourcing is useful, don't prohibit it artificially.
Detecting which cases of user input would cause stack overflow is
equivalent to solving the Halting Problem, which is not practical.  So
our choices are to either cripple the user unnecessarily, or do a better
job in at least detecting after the fact when the user did something dumb.

 
 Or at least suitable error message if recursive source loop detected.

GNU libsigsegv is a library which provides the means for applications to
give a NICE error message when user input causes stack overflow.  For
example, both m4 and gawk use it so that a stack recursion exits
gracefully rather than with a segfault and core dump (after all, it's
the user's fault for putting in bad input, not a bug in the program).
Maybe it's worth investigating if bash could link with it?

-- 
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread bogun.dmitriy
IMHO no user action should lead to SIGSEGV! I am not objecting against
recursive source itself. But when I got SIGSEGV from bash, I had no
idea why it happened. I made the recursive source by mistake and
spent a lot of time looking for what exactly led to the SIGSEGV.

Put a configurable limit on the depth of recursive source. There is almost
no legitimate use for recursive source at depth... 1, for
example. If someone needs such recursion depth, they can always raise the
limit or turn it off by setting it to 0.

PS Perhaps recursive function execution needs a limit too.


2014-08-28 9:48 GMT-07:00 Eric Blake ebl...@redhat.com:

 On 08/27/2014 07:07 PM, bogun.dmit...@gmail.com wrote:
 
  Expected result:
  Block source for files already listed in ${BASH_SOURCE}. Perhaps this
  behavior and changed behavior should be switched by option in set
  command.

 No.  Recursive sourcing is useful, don't prohibit it artificially.
 Detecting which cases of user input would cause stack overflow is
 equivalent to solving the Halting Problem, which is not practical.  So
 our choices are to either cripple the user unnecessarily, or do a better
 job in at least detecting after the fact when the user did something dumb.

 
  Or at least suitable error message if recursive source loop detected.

 GNU libsigsegv is a library which provides the means for applications to
 give a NICE error message when user input causes stack overflow.  For
 example, both m4 and gawk use it so that a stack recursion exits
 gracefully rather than with a segfault and core dump (after all, it's
 the user's fault for putting in bad input, not a bug in the program).
 Maybe it's worth investigating if bash could link with it?

 --
 Eric Blake   eblake redhat com   +1-919-301-3266
 Libvirt virtualization library http://libvirt.org




Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Eric Blake
On 08/28/2014 12:02 PM, bogun.dmit...@gmail.com wrote:
 IMHO no user action should lead to SIGSEGV! I am not objecting against
 recursive source itself. But when I got SIGSEGV from bash, I had no
 idea why it happened. I made the recursive source by mistake and
 spent a lot of time looking for what exactly led to the SIGSEGV.

SIGSEGV is what happens on stack overflow, unless you integrate a stack
overflow detector like GNU libsigsegv with your sources to catch the
segv and replace it with a nice error message.

As to whether or not user code should be able to cause stack overflow,
we can't prevent it.  Reliably preventing stack overflow would be
equivalent to solving the Halting Problem, which we cannot do; so all we
can do is detect when it happens.

 
 Put a configurable limit on the depth of recursive source. There is almost
 no legitimate use for recursive source at depth... 1, for
 example. If someone needs such recursion depth, they can always raise the
 limit or turn it off by setting it to 0.

The GNU Coding Standards state that GNU software cannot have arbitrary
limits by default.  Any limit we pick, other than unlimited (your
proposal of turning it to 0), would be an arbitrary limit for someone
who has a machine with more memory and a larger stack.  So 0 is the only
sane default, but that's no different than what we already have.

-- 
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Greg Wooledge
On Thu, Aug 28, 2014 at 11:49:02AM -0700, bogun.dmit...@gmail.com wrote:
 So why should I get SIGSEGV instead of a nice, detailed error message on
 recursion? Can we detect it?

You can't detect that it's going to happen.  You can only receive the
SIGSEGV *after* it happens.

We already have a configurable switch that would have prevented your
original issue:

  Functions may be recursive.  The FUNCNEST variable may be used to
  limit the depth of the function call stack and restrict the number of
  function invocations.  By default, no limit is imposed on the number
  of recursive calls.

Just export FUNCNEST=1000 somewhere in your dotfiles and you'll never
have this particular seg fault again (assuming all your scripts run in
an environment that inherits this).
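For example (a sketch), with FUNCNEST set a runaway function recursion is cut off with an error message on stderr instead of exhausting the stack:

```shell
#!/bin/bash
FUNCNEST=100   # limit the function call stack to 100 frames
f() { f; }     # unbounded recursion
f
# bash aborts the call at depth 100 with an error along the lines of
# "maximum function nesting level exceeded (100)" instead of a segfault.
```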

The default of 0 is quite reasonable, as others have already explained.



Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Eric Blake
On 08/28/2014 12:49 PM, bogun.dmit...@gmail.com wrote:

 If we follow this logic - we shouldn't try to catch incorrect user
 behaviour at all... we will just get errors/signals from the kernel.

 Simple situation:
 $ ((1/0))
 bash: ((: 1/0: division by 0 (error token is 0)

 Why is there a check on division by zero? Can we predict this? - No. But
 we can detect it... and we output a nice, detailed error message.

Actually, division by zero is fairly easy to check, and this is probably
a case where bash is checking for division by 0 up front rather than
handling SIGFPE after the fact.

 
 So why should I get SIGSEGV instead of a nice, detailed error message on
 recursion? Can we detect it?

GNU libsigsegv proves that it is possible to detect when SIGSEGV was
caused by stack overflow.  It can't help prevent stack overflow, and you
_don't_ want to penalize your code by adding checking code into the
common case (if I'm about to overflow, error out instead), but leave
stack overflow as the exceptional case (if I've already overflowed and
received SIGSEGV, convert it into a nice error message to the user
before exiting cleanly, instead of the default behavior of dumping
core).  But someone would have to write the patch for bash to link
against libsigsegv.

-- 
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread bogun.dmitriy
2014-08-28 11:54 GMT-07:00 Greg Wooledge wool...@eeg.ccf.org:

 On Thu, Aug 28, 2014 at 11:49:02AM -0700, bogun.dmit...@gmail.com wrote:
  So why I should got SIGSEGV instead of nice, detailed error message in
  recursion? We can detect it?

 You can't detect that it's going to happen.  You can only receive the
 SIGSEGV *after* it happens.

 We already have a configurable switch that would have prevented your
 original issue:

   Functions may be recursive.  The FUNCNEST variable may be used to
   limit the depth of the function call stack and restrict the number of
   function invocations.  By default, no limit is imposed on the number
   of recursive calls.

 Just export FUNCNEST=1000 somewhere in your dotfiles and you'll never
 have this particular seg fault again (assuming all your scripts run in
 an environment that inherits this).

This is not true:
$ export FUNCNEST=5
$ bash b.sh
Segmentation fault
$ cat b.sh
#!/bin/bash

set -e
source $(dirname ${BASH_SOURCE[0]}/c.sh)



 The default of 0 is quite reasonable, as others have already explained.

OK. Let it be 0 by default. But we still need an option/variable to limit
source recursion, especially since we have such a limit for function
recursion depth.
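In the meantime, a script can impose such a limit on itself (a sketch; the threshold of 100 is arbitrary). ${#BASH_SOURCE[@]} grows by one for each nested source, so a guard at the top of the file stops the runaway before the stack does:

```shell
#!/bin/bash
# Self-guard against runaway recursive sourcing of this file.
if (( ${#BASH_SOURCE[@]} > 100 )); then
    echo "source recursion too deep, aborting" >&2
    return 1 2>/dev/null || exit 1   # return if sourced, else exit
fi
source "${BASH_SOURCE[0]}"   # deliberately re-source this same file
```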


Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Bob Proulx
bogun.dmit...@gmail.com wrote:
 Eric Blake wrote:
  bogun.dmit...@gmail.com wrote:
   IMHO any user action should not lead to SIGSEGV! I am not objecting 
   against
   recursive sourse itself. But when I got SIGSEGV from bash, I have no
   idea why this is happened. I have made recursive sourse by mistake and
   spend a lot of time looking up what exactly lead to SIGSEGV.

But you wrote the program that caused the SIGSEGV.  At that point you
are no longer just a user but are now a programmer.  Technically
speaking the SIGSEGV problem would be a bug in your bash script
program.  You as the programmer of that script have the responsibility
for it.

  SIGSEGV is what happens on stack overflow, unless you integrate a stack
  overflow detector like GNU libsigsegv with your sources to catch the
  segv and replace it with a nice error message.

 I know when and why program can get SIGSEGV.

Then you already know that it is a recursion problem that has run out
of stack space.  Any program that allows recursion might be programmed
in error.  If this is not suitable then using a programming language
that does not allow recursion might be the choice for you.  For
example the old FORTRAN did not allow recursion and yet it enjoyed a
lot of popularity at one time.

 So why should I get SIGSEGV instead of a nice, detailed error message on
 recursion? Can we detect it?

Because in general it is a hard problem to solve.  And it isn't always
about making it bigger.  It may simply be a very small environment
without enough stack to complete.  The program may be completely fine if
there were enough stack.  It is hard to tell whether the program is in an
infinite recursion or whether it simply didn't have enough stack space to
complete and would complete if there were more.  All that can be said
is that it didn't have enough stack.

  The GNU Coding Standards state that GNU software cannot have arbitrary
  limits by default.  Any limit we pick, other than unlimited (your
  ...

 IBM produced a new CPU - it solves an infinite loop in 8 seconds.

 How does a bigger amount of memory save you from infinite recursion? It
 leads to a bigger delay before the SIGSEGV and nothing else.

Haha.  Of course it is a hardware problem.  If we only had a machine
with an infinitely large stack then we would never run out of stack
space and could never SIGSEGV due to stack overflow.

Of course then the program would simply run forever since it would
continue to do exactly what it had been programmed to do.  Which is
one of the things that makes this so hard.  In order for an automated
detection the program must say that the program should not do what the
programmer told it to do.  That is where it runs into problems.
Similar to why auto-correction spell checkers force so many spelling
errors on humans.

  $ ulimit -a
 data seg size   (kbytes, -d) unlimited
 ...
 
 so... in real life we have limits. Some of them are turned off, but they
 exist and can be adjusted.

Those are explicit process limits in addition to the actual limits.
It is always easier to make them smaller.  But you can't make the
actual limits larger.

For example feel free to try to make use of that unlimited data set
size.  Let's assume you have a X sized memory machine.  Try to use a
thousand times X that amount of memory regardless of it being set to
unlimited and see how well things work.

 And if I have an option which I can change to some value suitable for me,
 and which can save me/show me a good error message in case of infinite
 recursion - I will use it. Others can leave it in the infinite position. We
 could have 2 options - one sets the recursion depth limit, the other sets
 the action when this limit is reached - deny deeper recursion / print a
 warning.

There is the sharp kitchen knife thought-problem.  Have you ever cut
yourself on a kitchen knife?  Of course most of us have at one time or
another.  Can you envision a way to make kitchen knives safer?  A way
to make it impossible for you to cut yourself on one?  Think about how
you would make a kitchen knife safer.  Give it a serious think.  Then
ask yourself this question.  Would you use such a knife that you
designed yourself?  The answer is invariably no.  There has always
been some inherent danger in using a kitchen knife.  We accept this
because the alternatives are all worse.

Bob



Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread bogun.dmitriy
2014-08-28 12:08 GMT-07:00 Eric Blake ebl...@redhat.com:

 On 08/28/2014 12:49 PM, bogun.dmit...@gmail.com wrote:

  If follow this logic - we shoul try to catch incorrect user behaviour...
 we
  will got errors/signals from kernel.
 
  Simple situation:
  $ ((1/0))
  bash: ((: 1/0: division by 0 (error token is 0)
 
  Whey there is check on division by zero? We can predict this? - No. But
 we
  can detect it... and we out nice, detailed error message.

 Actually, division by zero is fairly easy to check, and this is probably
 a case where bash is checking for division by 0 up front rather than
 handling SIGFPE after the fact.

Is it so heavy to check the length of the $BASH_SOURCE array?

 So why I should got SIGSEGV instead of nice, detailed error message in
  recursion? We can detect it?

 GNU libsigsegv proves that it is possible to detect when SIGSEGV was
 caused by stack overflow.  It can't help prevent stack overflow, and you
 _don't_ want to penalize your code by adding checking code into the
 common case (if I'm about to overflow, error out instead), but leave
 stack overflow as the exceptional case (if I've already overflowed and
 received SIGSEGV, convert it into a nice error message to the user
 before exiting cleanly, instead of the default behavior of dumping
 core).  But someone would have to write the patch for bash to link
 against libsigsegv.

I understand it. It is better than getting SIGSEGV, but not a solution for
this issue, as I see it.

--
 Eric Blake   eblake redhat com   +1-919-301-3266
 Libvirt virtualization library http://libvirt.org




Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Eric Blake
On 08/28/2014 03:00 PM, bogun.dmit...@gmail.com wrote:
 Whey there is check on division by zero? We can predict this? - No. But
 we
 can detect it... and we out nice, detailed error message.

 Actually, division by zero is fairly easy to check, and this is probably
 a case where bash is checking for division by 0 up front rather than
 handling SIGFPE after the fact.

 Is it so heavy to check length of $BASH_SOURCE array?

Checking the length of $BASH_SOURCE array (or indeed, ANY check of some
counter compared to the current recursion depth) is only an
approximation.  It tells whether you are nearing an artificial limit.
It does NOT tell you if you will overflow the stack (it's possible to
set the variable too high, and still trigger a stack overflow; more
likely, if you set the variable too low, someone can come up with a
recursive program that _would_ have completed had it been granted full
access to the stack but now fails because your limit got in the way).

For _certain_ cases of programming, determining maximum stack usage is
computable (start at the leaves, figure out how much they allocate, then
work your way up the call stack).  But the moment you introduce
recursion into the mix, where the recursion is conditionally gated on
user input, it is _inherently impossible_ to compute the maximum stack
depth for all possible program execution flows, shy of actually
executing the program.  Bash, and many other scripting languages, are in
such a boat - by giving the end user the power to write a recursive
function, they are also giving the end user the power to exhaust an
unknowable stack depth.

As long as the interpreter implements user recursion by using recursive
functions itself, there is no way to say "if I call your next function,
I will overflow the stack, so pre-emptively error out now, but keep my
interpreter running".  The _BEST_ we can do is detect "I just ran
out of stack", but in jumping to the SIGSEGV handler, I can't guarantee
whether I interrupted a malloc or any other locked code, therefore I
cannot safely use malloc or any other locking function between now and
calling _exit().

It is possible to write a class of programs that GUARANTEE that if stack
overflow happens, that it did not happen within any core function that
might hold a lock, and therefore the program can longjmp back to a safe
point, abort the overflowing operation, and carry on with life.  But it
is EXTREMELY TRICKY to do - you have to be absolutely vigilant that you
separate your code into two buckets - the set of code that might obtain
any lock, but is used non-recursively (and therefore you can compute the
maximum stack depth of that code), and the set of code that recurses,
but cannot obtain any lock without first checking that the current stack
depth plus the maximum depth of the locking code will still fit in the
stack.  With a program like that, you can then pre-emptively detect
stack overflow for the next call into non-recursive code without relying
on SIGSEGV (you'd still want the SIGSEGV handler for the recursive part,
but can now longjmp back to your non-recursive outer handler).  But it
is not practical, and would mean a complete rewrite of the bash source
code, and probably even a parallel stripped-down rewrite of glibc.

In many cases, it is also possible to convert recursive code into
iterative code; but usually, conversions like this involve trade-offs,
such as requiring heap storage to track progress between iterations
where the old code used the stack.  Again, doing such conversions to the
bash code base would mean a complete rewrite.  And such a conversion is
worthwhile only if everything doable in one leg of the recursion is
known up front - but bash is a scripting language and can't predict what
all user input code will want to do at each level of recursion, short of
executing the script.

 
 So why I should got SIGSEGV instead of nice, detailed error message in
 recursion? We can detect it?

 GNU libsigsegv proves that it is possible to detect when SIGSEGV was
 caused by stack overflow.  It can't help prevent stack overflow, and you
 _don't_ want to penalize your code by adding checking code into the
 common case (if I'm about to overflow, error out instead), but leave
 stack overflow as the exceptional case (if I've already overflowed and
 received SIGSEGV, convert it into a nice error message to the user
 before exiting cleanly, instead of the default behavior of dumping
 core).  But someone would have to write the patch for bash to link
 against libsigsegv.

 I understand it. It is better than getting SIGSEGV, but not a solution for
 this issue, as I see it.

It's not clear what issue you think needs solving.

If you are trying to solve the issue of "prevent all possible SIGSEGV
from stack overflow", the answer is that it's impossible.

If you are trying to solve the issue of "bash dumps core based on user
input, but I'd prefer a nice error message telling me my program is
buggy", the answer is to write a patch to 

Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread bogun.dmitriy
2014-08-28 13:59 GMT-07:00 Bob Proulx b...@proulx.com:

 bogun.dmit...@gmail.com wrote:
  Eric Blake wrote:
   bogun.dmit...@gmail.com wrote:
    IMHO any user action should not lead to SIGSEGV! I am not objecting
    against recursive source itself. But when I got SIGSEGV from bash, I
    had no idea why it happened. I made the recursive source by mistake
    and spent a lot of time tracking down what exactly led to SIGSEGV.

 But you wrote the program that caused the SIGSEGV.  At that point you
 are no longer just a user but are now a programmer.  Technically
 speaking the SIGSEGV problem would be a bug in your bash script
 program.  You as the programmer of that script have the responsibility
 for it.

No action in my script should lead to SIGSEGV in the interpreter! If I write
a program in some compiled language, for example C, compile it and get
SIGSEGV - that is my problem. But in this case, my program is executed by an
interpreter, and if the interpreter fails, whatever the reason, that is a
problem of the interpreter.

What would you say if gcc(cc) went down with SIGSEGV while compiling your
code? Is it a problem of gcc or of your code?



   SIGSEGV is what happens on stack overflow, unless you integrate a stack
   overflow detector like GNU libsigsegv with your sources to catch the
   segv and replace it with a nice error message.
 
  I know when and why a program can get SIGSEGV.

 Then you already know that it is a recursion problem that has run out
 of stack space.  Any program that allows recursion might be programmed
 in error.  If this is not suitable then using a programming language
 that does not allow recursion might be the choice for you.  For
 example the old FORTRAN did not allow recursion and yet it enjoyed a
 lot of popularity at one time.

 Are you making fun of me?

I got an error in the interpreter (we are talking about bash, if you
forgot). I made a proposal for how it could be solved... And you tell me to
write in something else. Hm... perhaps that is not such a bad idea, if all
the bash developers prefer not to fix errors in their code.


  So why I should got SIGSEGV instead of nice, detailed error message in
  recursion? We can detect it?

 Because in general it is a hard problem to solve.  And it isn't always
 about making it bigger.  It may simply be a very small environment
 without enough stack to complete.  The program may be completely fine if
 there were enough stack.  It is hard to tell if the program is in an
 infinite recursion or if it simply didn't have enough stack space to
 complete and would complete if there were more.  All that can be said
 is that it didn't have enough stack.


There is already a variable that limits function recursion depth (FUNCNEST).
Why should there not be a similar variable for source recursion depth?
Why do we have the FUNCNEST limit, if this is not a problem of the
interpreter but a problem of the end programmer?
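For reference, FUNCNEST does behave as described for function calls. A quick sketch (assumes bash 4.2+, where FUNCNEST was introduced; the exact diagnostic wording may differ between versions):

```shell
# FUNCNEST caps function-call nesting: exceeding it aborts the call chain
# with a shell diagnostic instead of letting the recursion run toward SIGSEGV.
FUNCNEST=32

runaway() {
  runaway "$(( $1 + 1 ))"   # unbounded recursion on purpose
}

# Run in a subshell so the aborted recursion doesn't take down this shell,
# and capture bash's diagnostic.
diag=$( (runaway 1) 2>&1 )
echo "$diag"
```

There is no equivalent variable for `source` nesting in the bash versions discussed in this thread, which is the poster's point.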

  The GNU Coding Standards state that GNU software cannot have arbitrary
   limits by default.  Any limit we pick, other than unlimited (your
   ...
 
  IBM produced a new CPU - it solves an infinite loop in 8 seconds.
 
  How does a bigger amount of memory save you from infinite recursion? It
  leads to a bigger delay before SIGSEGV and nothing else.

 Haha.  Of course it is a hardware problem.  If we only had a machine
 with an infinitely large stack then we would never run out of stack
 space and could never SIGSEGV due to stack overflow.

And what? Should the user wait forever? Until we get a CPU that solves
infinite loops?
There are not so many cases that can use infinite recursion.

 Of course then the program would simply run forever since it would
 continue to do exactly what it had been programmed to do.  Which is
 one of the things that makes this so hard.  In order for an automated
 detection the program must say that the program should not do what the
 programmer told it to do.  That is where it runs into problems.
 Similar to why auto-correction spell checkers force so many spelling
 errors on humans.


Why can't/shouldn't I set a recursion limit? I don't want my errors solved
automatically. I just want to have some protection.



   $ ulimit -a
  data seg size   (kbytes, -d) unlimited
  ...
 
  so... in real life we have limits. Some of them are turned off, but they
  exist and can be adjusted.

 Those are explicit process limits in addition to the actual limits.
 It is always easier to make them smaller.  But you can't make the
 actual limits larger.

 For example feel free to try to make use of that unlimited data set
 size.  Let's assume you have a X sized memory machine.  Try to use a
 thousand times X that amount of memory regardless of it being set to
 unlimited and see how well things work.

again... We are not talking about hardware limits!
If my application tries to get more memory than is installed in my PC, and
more than the available swap space, it will be terminated by the kernel.


  And if I have an option which I can change to some value suitable for me,
  and this can save me / show me a good error 

Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Eric Blake
On 08/28/2014 03:50 PM, bogun.dmit...@gmail.com wrote:
 No action in my script should lead to SIGSEGV in the interpreter! If I
 write a program in some compiled language, for example C, compile it and
 get SIGSEGV - that is my problem. But in this case, my program is executed
 by an interpreter, and if the interpreter fails, whatever the reason, that
 is a problem of the interpreter.

No, it is a problem of your buggy program.

 
 What would you say if gcc(cc) went down with SIGSEGV while compiling your
 code? Is it a problem of gcc or of your code?

If gcc segfaults because it implements #include via recursion, and I
wrote a recursion loop of #includes into my source, then I'd say the bug
was mine, not gcc's.  Just the same as if you write a recursion loop
into your bash program.

It's not the compiler's fault that input that requests recursion can
abuse the stack.  Rather, it is the fault of the input.

 To be short - you (the community) don't want to add a limit, because its
 default value should be infinite!

I'm not saying that a limit is a bad idea, just that a limit on by
default is a bad idea (it goes against the GNU Coding Standards of no
arbitrary limits).  The moment YOU change from the default of unlimited
to your chosen limit, it is no longer an arbitrary limitation of bash,
but a conscious choice on your part.  But as long as the limit defaults
to being off, it brings us back to the question of whether bash should
dump core when the stack overflows due to a buggy user input.  It's not
a bug in bash, but in the user program; and that's WHY libsigsegv exists
(to convert what would have been a core dump into a nice error message,
making it obvious that the bug was in the user input).

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org



signature.asc
Description: OpenPGP digital signature


Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread bogun.dmitriy
2014-08-28 14:43 GMT-07:00 Eric Blake ebl...@redhat.com:

 On 08/28/2014 03:00 PM, bogun.dmit...@gmail.com wrote:
  Why is there a check for division by zero? Can we predict it? - No. But
  we can detect it... and we output a nice, detailed error message.
 
  Actually, division by zero is fairly easy to check, and this is probably
  a case where bash is checking for division by 0 up front rather than
  handling SIGFPE after the fact.
 
  Is it so heavy to check the length of the $BASH_SOURCE array?

 Checking the length of $BASH_SOURCE array (or indeed, ANY check of some
 counter compared to the current recursion depth) is only an
 approximation.  It tells whether you are nearing an artificial limit.
 It does NOT tell you if you will overflow the stack (it's possible to
 set the variable too high, and still trigger a stack overflow; more
 likely, if you set the variable too low, someone can come up with a
 recursive program that _would_ have completed had it been granted full
 access to the stack but now fails because your limit got in the way).
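The check being discussed - comparing the length of `$BASH_SOURCE` against a cap before sourcing - can be sketched as follows (a hypothetical illustration, not an actual bash feature; the depth cap of 20 is arbitrary, which is exactly the objection raised here):

```shell
# Hypothetical guard against runaway `source` recursion: every nested
# `source` pushes one element onto BASH_SOURCE, so its length approximates
# the current source-nesting depth.
guard=$(mktemp)
cat >"$guard" <<'EOF'
if (( ${#BASH_SOURCE[@]} > 20 )); then
  echo "source nesting too deep (depth ${#BASH_SOURCE[@]}), aborting" >&2
  return 1
fi
source "${BASH_SOURCE[0]}"   # the script sources itself, as in the report
EOF
diag=$(bash "$guard" 2>&1)
rm -f "$guard"
echo "$diag"
```

With the guard in place the self-sourcing script bails out with a message instead of recursing toward stack exhaustion; without it, the same script is the segfault reproducer this thread is about.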

 For _certain_ cases of programming, determining maximum stack usage is
 computable (start at the leaves, figure out how much they allocate, then
 work your way up the call stack).  But the moment you introduce
 recursion into the mix, where the recursion is conditionally gated on
 user input, it is _inherently impossible_ to compute the maximum stack
 depth for all possible program execution flows, shy of actually
 executing the program.  Bash, and many other scripting languages, are in
 such a boat - by giving the end user the power to write a recursive
 function, they are also giving the end user the power to exhaust an
 unknowable stack depth.

 As long as the interpreter implements user recursion by using recursive
 functions itself, there is no way to say if I call your next function,
 I will overflow the stack, so pre-emptively error out now, but keep my
 interpreter running.  The _BEST_ we can do is detect that I just ran
 out of stack, but in jumping to the SIGSEGV handler, I can't guarantee
 whether I interrupted a malloc or any other locked code, therefore I
 cannot safely use malloc or any other locking function between now and
 calling _exit().

 It is possible to write a class of programs that GUARANTEE that if stack
 overflow happens, that it did not happen within any core function that
 might hold a lock, and therefore the program can longjmp back to a safe
 point, abort the overflowing operation, and carry on with life.  But it
 is EXTREMELY TRICKY to do - you have to be absolutely vigilant that you
 separate your code into two buckets - the set of code that might obtain
 any lock, but is used non-recursively (and therefore you can compute the
 maximum stack depth of that code), and the set of code that recurses,
 but cannot obtain any lock without first checking that the current stack
 depth plus the maximum depth of the locking code will still fit in the
 stack.  With a program like that, you can then pre-emptively detect
 stack overflow for the next call into non-recursive code without relying
 on SIGSEGV (you'd still want the SIGSEGV handler for the recursive part,
 but can now longjmp back to your non-recursive outer handler).  But it
 is not practical, and would mean a complete rewrite of the bash source
 code, and probably even a parallel stripped-down rewrite of glibc.

 In many cases, it is also possible to convert recursive code into
 iterative code; but usually, conversions like this involve trade-offs,
 such as requiring heap storage to track progress between iterations
 where the old code used the stack.  Again, doing such conversions to the
 bash code base would mean a complete rewrite.  And such a conversion is
 worthwhile only if everything doable in one leg of the recursion is
 known up front - but bash is a scripting language and can't predict what
 all user input code will want to do at each level of recursion, short of
 executing the script.

 
  So why should I get SIGSEGV instead of a nice, detailed error message on
  recursion? Can we detect it?
 
  GNU libsigsegv proves that it is possible to detect when SIGSEGV was
  caused by stack overflow.  It can't help prevent stack overflow, and you
  _don't_ want to penalize your code by adding checking code into the
  common case (if I'm about to overflow, error out instead), but leave
  stack overflow as the exceptional case (if I've already overflowed and
  received SIGSEGV, convert it into a nice error message to the user
  before exiting cleanly, instead of the default behavior of dumping
  core).  But someone would have to write the patch for bash to link
  against libsigsegv.
 
  I understand it. It's better than getting SIGSEGV, but not a solution for
  this issue, as I see it.

 It's not clear what issue you think needs solving.

 If you are trying to solve the issue of "prevent all possible SIGSEGV
 from stack overflow", the answer is that it's impossible.

 If you are trying to solve the issue 

Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread bogun.dmitriy
2014-08-28 14:57 GMT-07:00 Eric Blake ebl...@redhat.com:

 On 08/28/2014 03:50 PM, bogun.dmit...@gmail.com wrote:
  No action in my script should lead to SIGSEGV in the interpreter! If I
  write a program in some compiled language, for example C, compile it and
  get SIGSEGV - that is my problem. But in this case, my program is executed
  by an interpreter, and if the interpreter fails, whatever the reason, that
  is a problem of the interpreter.

 No, it is a problem of your buggy program.

I got your point. There is no way I can agree with it.



 
  What would you say if gcc(cc) went down with SIGSEGV while compiling your
  code? Is it a problem of gcc or of your code?

 If gcc segfaults because it implements #include via recursion, and I
 wrote a recursion loop of #includes into my source, then I'd say the bug
 was mine, not gcc's.  Just the same as if you write a recursion loop
 into your bash program.

 It's not the compiler's fault that input that requests recursion can
 abuse the stack.  Rather, it is the fault of the input.

Unhandled program termination is not an input problem; it is a program
problem.

Looks like the gcc programmers are not so dumb.

$ gcc a.c
In file included from a.h:1:0,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 from a.h:1,
 

Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Chris Down

bogun.dmit...@gmail.com writes:

Is it so heavy to check the length of the $BASH_SOURCE array?


Adding artificial barriers that don't actually solve the problem is heavy
in terms of technical debt, even if not in code.




Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread bogun.dmitriy
2014-08-28 15:11 GMT-07:00 Chris Down ch...@chrisdown.name:

 bogun.dmit...@gmail.com writes:

  Is it so heavy to check the length of the $BASH_SOURCE array?


  Adding artificial barriers that don't actually solve the problem is heavy
  in terms of technical debt, even if not in code.

Ok. Please remove the FUNCNEST limit from the code.


Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Eric Blake
On 08/28/2014 04:11 PM, bogun.dmit...@gmail.com wrote:
 If gcc segfaults because it implements #include via recursion, and I
 wrote a recursion loop of #includes into my source, then I'd say the bug
 was mine, not gcc's.  Just the same as if you write a recursion loop
 into your bash program.

 It's not the compiler's fault that input that requests recursion can
 abuse the stack.  Rather, it is the fault of the input.

 Unhandled program termination is not an input problem; it is a program
 problem.
 
 Looks like the gcc programmers are not so dumb.
 
 $ gcc a.c
 In file included from a.h:1:0,
...
  from a.h:1,
  from a.c:1:
 a.h:1:15: error: #include nested too deeply

If you think the gcc programmers have imposed an artificial limit, raise
a bug report on their list.  Although the GNU Coding Standards request
no artificial limits, we cannot enforce what other programs choose to do
or not do by having a conversation on this list.

Also, it's not obvious whether gcc actually implements includes via
recursion, or whether it uses some other means - so even if they lifted
their limit for how much is too much include nesting, it's not readily
obvious whether that would turn into a stack overflow or a more generic
out of memory error.

Furthermore, the C language standard documents that compilers must allow
at least a certain amount of nesting, and declares that programs written
with more than that level of nesting are not strictly defined, and
therefore a compiler can do what it wants when it gets past that limit.
 Arguably, the fact that gcc errors out gracefully is a product of the
language definition that they are parsing - the language itself imposes
a design limit, and once you go beyond the guarantees of the language,
the compiler does not have to recurse forever.  But there is no such
comparable wording in POSIX for a minimum level of mandatory recursion
support in the shell.

 
 Should I make a patch and add libsigsegv?

Patches speak louder than words in open source projects.  If you are up
to taking on the task, go for it.  You can use GNU m4 and gawk as
examples of programs that have integrated in libsigsegv stack overflow
detection.

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org



signature.asc
Description: OpenPGP digital signature


Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread bogun.dmitriy
2014-08-28 15:32 GMT-07:00 Eric Blake ebl...@redhat.com:

 On 08/28/2014 04:11 PM, bogun.dmit...@gmail.com wrote:
  If gcc segfaults because it implements #include via recursion, and I
  wrote a recursion loop of #includes into my source, then I'd say the bug
  was mine, not gcc's.  Just the same as if you write a recursion loop
  into your bash program.
 
  It's not the compiler's fault that input that requests recursion can
  abuse the stack.  Rather, it is the fault of the input.
 
  Unhandled program termination is not an input problem; it is a program
  problem.
 
  Looks like the gcc programmers are not so dumb.
 
  $ gcc a.c
  In file included from a.h:1:0,
 ...
   from a.h:1,
   from a.c:1:
  a.h:1:15: error: #include nested too deeply

 If you think the gcc programmers have imposed an artificial limit, raise
 a bug report on their list.  Although the GNU Coding Standards request
 no artificial limits, we cannot enforce what other programs choose to do
 or not do by having a conversation on this list.

 Also, it's not obvious whether gcc actually implements includes via
 recursion, or whether it uses some other means - so even if they lifted
 their limit for how much is too much include nesting, it's not readily
 obvious whether that would turn into a stack overflow or a more generic
 out of memory error.


And do they refuse to fix that too?


 Furthermore, the C language standard documents that compilers must allow
 at least a certain amount of nesting, and declares that programs written
 with more than that level of nesting are not strictly defined, and
 therefore a compiler can do what it wants when it gets past that limit.
  Arguably, the fact that gcc errors out gracefully is a product of the
 language definition that they are parsing - the language itself imposes
 a design limit, and once you go beyond the guarantees of the language,
 the compiler does not have to recurse forever.  But there is no such
 comparable wording in POSIX for a minimum level of mandatory recursion
 support in the shell.

You don't want to hear the end user.



 
  Should I make a patch and add libsigsegv?

 Patches speak louder than words in open source projects.  If you are up
 to taking on the task, go for it.  You can use GNU m4 and gawk as
 examples of programs that have integrated in libsigsegv stack overflow
 detection.


And what is this mailing list for? Don't answer - this doesn't make any
sense any more.


 --
 Eric Blake   eblake redhat com+1-919-301-3266
 Libvirt virtualization library http://libvirt.org




Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Chris Down

bogun.dmit...@gmail.com writes:

And what is this mailing list for? Don't answer - this doesn't make any
sense any more.


This mailing list is for reporting bugs. So far nobody thinks that what 
you reported is a bug, so you would essentially be making a feature 
request. If you want to prioritise that, it's your own prerogative, but 
there are far more important things to worry about than this.




Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread bogun.dmitriy
2014-08-28 15:44 GMT-07:00 Chris Down ch...@chrisdown.name:

 bogun.dmit...@gmail.com writes:

 And what is this mailing list for? Don't answer - this doesn't make any
 sense any more.


 This mailing list is for reporting bugs. So far nobody thinks that what
 you reported is a bug, so you would essentially be making a feature
 request. If you want to prioritise that, it's your own prerogative, but
 there are far more important things to worry about than this.

O... you have more serious bugs that you refuse to fix. Perfect. :)

PS A SIGSEGV in an application - this is a bug. You can keep thinking that
it is a feature and do nothing.


Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Chris Down

bogun.dmit...@gmail.com writes:

O... you have more serious bugs


Than fixing a segfault that occurs when the user is obviously doing 
something stupid? Sure.