Re: c/unix q

2013-10-25 Thread E.S. Rosenberg
2013/6/6 Erez D erez0...@gmail.com:



 On Tue, Jun 4, 2013 at 6:09 PM, Shachar Shemesh shac...@shemesh.biz wrote:

 On 04/06/13 15:28, Erez D wrote:

 thanks,

 so i guess if i use unidirectional connection, and the reader does not
 expect to get an EOF()
 thank i'm safe.

 Why are you so keen on doing it wrong?

 No, you are not safe. If the child process dies because of a segmentation
 fault (or whatever), the parent will notice this through the EOF received (I
 am assuming here, since you couldn't be bothered with closing a file
 descriptor, that you did not install a SIGCHLD handler to monitor for this
 possibility). This means that should one process die unexpectedly, the other
 will hang forever.

 it's not a matter of being bothered. closing a file has it's implications

 1.  close the file for one thread closes for all
Threads and fork are two very different things; best practice for fork
('full' children - I think everyone understands fork() when you say
child) is to close, while with threads I believe that is not the case.
 2. what if i want later children using the same pipe, as in all childs write
 to same pipe read by parent...
So the children all close the read end and the parent only
closes the write end - where is the problem?


 Best practices are there for a reason, despite what others here might have
 you think.

 Shachar



 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-06 Thread Erez D
On Tue, Jun 4, 2013 at 6:09 PM, Shachar Shemesh shac...@shemesh.biz wrote:

  On 04/06/13 15:28, Erez D wrote:

   thanks,

 so i guess if i use unidirectional connection, and the reader does not
 expect to get an EOF()
  thank i'm safe.

   Why are you so keen on doing it wrong?

 No, you are not safe. If the child process dies because of a segmentation
 fault (or whatever), the parent will notice this through the EOF received
 (I am assuming here, since you couldn't be bothered with closing a file
 descriptor, that you did not install a SIGCHLD handler to monitor for this
 possibility). This means that should one process die unexpectedly, the
 other will hang forever.

It's not a matter of being bothered. Closing a file has its implications:

1. Closing the file descriptor in one thread closes it for all threads.
2. What if I want later children to use the same pipe, as in all children
writing to the same pipe read by the parent?


 Best practices are there for a reason, despite what others here might have
 you think.

 Shachar

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-06 Thread Erez D
On Thu, Jun 6, 2013 at 5:04 PM, E.S. Rosenberg e...@g.jct.ac.il wrote:

 2013/6/6 Erez D erez0...@gmail.com:
 
 
 
  On Tue, Jun 4, 2013 at 6:09 PM, Shachar Shemesh shac...@shemesh.biz
 wrote:
 
  On 04/06/13 15:28, Erez D wrote:
 
  thanks,
 
  so i guess if i use unidirectional connection, and the reader does not
  expect to get an EOF()
  thank i'm safe.
 
  Why are you so keen on doing it wrong?
 
  No, you are not safe. If the child process dies because of a
 segmentation
  fault (or whatever), the parent will notice this through the EOF
 received (I
  am assuming here, since you couldn't be bothered with closing a file
  descriptor, that you did not install a SIGCHLD handler to monitor for
 this
  possibility). This means that should one process die unexpectedly, the
 other
  will hang forever.
 
  it's not a matter of being bothered. closing a file has it's implications
 
  1.  close the file for one thread closes for all
 thread and fork are 2 very different things, best practice for fork
 ('full' children, I think everyone understands fork() when you say
 child) is to close, when using threads that is I believe not the case.
  2. what if i want later children using the same pipe, as in all childs
 write
  to same pipe read by parent...
 so the children are all closing the read end and the parent only
 closes write, where is the problem?

If the parent closes the write side, then newly forked children have their
write side already closed.

 
 
  Best practices are there for a reason, despite what others here might
 have
  you think.
 
  Shachar
 
 
 
  ___
  Linux-il mailing list
  Linux-il@cs.huji.ac.il
  http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
 

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-06 Thread E.S. Rosenberg
re:all, forgot to change my from field.

2013/6/6 Erez D erez0...@gmail.com:



 On Thu, Jun 6, 2013 at 5:04 PM, E.S. Rosenberg e...@g.jct.ac.il wrote:

 2013/6/6 Erez D erez0...@gmail.com:
 
 
 
  On Tue, Jun 4, 2013 at 6:09 PM, Shachar Shemesh shac...@shemesh.biz
  wrote:
 
  On 04/06/13 15:28, Erez D wrote:
 
  thanks,
 
  so i guess if i use unidirectional connection, and the reader does not
  expect to get an EOF()
  thank i'm safe.
 
  Why are you so keen on doing it wrong?
 
  No, you are not safe. If the child process dies because of a
  segmentation
  fault (or whatever), the parent will notice this through the EOF
  received (I
  am assuming here, since you couldn't be bothered with closing a file
  descriptor, that you did not install a SIGCHLD handler to monitor for
  this
  possibility). This means that should one process die unexpectedly, the
  other
  will hang forever.
 
  it's not a matter of being bothered. closing a file has it's
  implications
 
  1.  close the file for one thread closes for all
 thread and fork are 2 very different things, best practice for fork
 ('full' children, I think everyone understands fork() when you say
 child) is to close, when using threads that is I believe not the case.
  2. what if i want later children using the same pipe, as in all childs
  write
  to same pipe read by parent...
 so the children are all closing the read end and the parent only
 closes write, where is the problem?

 if the parent closes the write side, then new forked children have their
 write side already closed.
That's why we are able to check whether we are the child or the parent
from the return value of fork().
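
A minimal sketch of that check, for illustration - fork() returns 0 in the
child and the child's PID in the parent:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");        /* no child was created */
        return 1;
    }
    if (pid == 0)
        printf("child: fork() returned 0\n");
    else
        printf("parent: fork() returned child pid %d\n", (int)pid);
    return 0;
}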

 
 
  Best practices are there for a reason, despite what others here might
  have
  you think.
 
  Shachar
 
 
 
  ___
  Linux-il mailing list
  Linux-il@cs.huji.ac.il
  http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
 



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-06 Thread Erez D
On Thu, Jun 6, 2013 at 5:29 PM, E.S. Rosenberg esr+linux...@g.jct.ac.il wrote:

 re:all, forgot to change my from field.

 2013/6/6 Erez D erez0...@gmail.com:
 
 
 
  On Thu, Jun 6, 2013 at 5:04 PM, E.S. Rosenberg e...@g.jct.ac.il wrote:
 
  2013/6/6 Erez D erez0...@gmail.com:
  
  
  
   On Tue, Jun 4, 2013 at 6:09 PM, Shachar Shemesh shac...@shemesh.biz
   wrote:
  
   On 04/06/13 15:28, Erez D wrote:
  
   thanks,
  
   so i guess if i use unidirectional connection, and the reader does
 not
   expect to get an EOF()
   thank i'm safe.
  
   Why are you so keen on doing it wrong?
  
   No, you are not safe. If the child process dies because of a
   segmentation
   fault (or whatever), the parent will notice this through the EOF
   received (I
   am assuming here, since you couldn't be bothered with closing a file
   descriptor, that you did not install a SIGCHLD handler to monitor for
   this
   possibility). This means that should one process die unexpectedly,
 the
   other
   will hang forever.
  
   it's not a matter of being bothered. closing a file has it's
   implications
  
   1.  close the file for one thread closes for all
  thread and fork are 2 very different things, best practice for fork
  ('full' children, I think everyone understands fork() when you say
  child) is to close, when using threads that is I believe not the case.
   2. what if i want later children using the same pipe, as in all childs
   write
   to same pipe read by parent...
  so the children are all closing the read end and the parent only
  closes write, where is the problem?
 
  if the parent closes the write side, then new forked children have
 their
  write side already closed.
 That's why we are able to check if we are a child or a parent with the
 fork() function.

That doesn't help.

Sunday: parent creates a pipe.
Monday: parent forks child 1. Parent closes the write end, child 1 closes the
read end. Child 1 can now write and the parent can read.
Tuesday: parent forks child 2. Child 2 cannot write - the write end was already
closed by the parent on Monday.
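
For illustration, a rough sketch of that calendar - the second fork happens
after the parent has already closed its write end, so child 2 inherits a
descriptor table with no usable write end:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) return 1;        /* sunday: parent creates a pipe */

    if (fork() == 0) {                    /* monday: child 1 */
        close(fds[0]);                    /* child 1 closes read */
        write(fds[1], "hi from child 1\n", 16);
        _exit(0);
    }
    close(fds[1]);                        /* parent closes write */

    if (fork() == 0) {                    /* tuesday: child 2 */
        /* fds[1] was closed in the parent before this fork, so child 2
           inherited no open write end - this write fails with EBADF. */
        if (write(fds[1], "hi from child 2\n", 16) == -1)
            fprintf(stderr, "child 2: write: %s\n", strerror(errno));
        _exit(0);
    }

    char buf[64];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* only child 1's data arrives */
    close(fds[0]);
    return 0;
}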


  
  
  
   Best practices are there for a reason, despite what others here might
   have
   you think.
  
   Shachar
  
  
  
   ___
   Linux-il mailing list
   Linux-il@cs.huji.ac.il
   http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
  
 
 

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-06 Thread E.S. Rosenberg
2013/6/6 Erez D erez0...@gmail.com:



 On Thu, Jun 6, 2013 at 5:29 PM, E.S. Rosenberg esr+linux...@g.jct.ac.il
 wrote:

 re:all, forgot to change my from field.

 2013/6/6 Erez D erez0...@gmail.com:
 
 
 
  On Thu, Jun 6, 2013 at 5:04 PM, E.S. Rosenberg e...@g.jct.ac.il wrote:
 
  2013/6/6 Erez D erez0...@gmail.com:
  
  
  
   On Tue, Jun 4, 2013 at 6:09 PM, Shachar Shemesh shac...@shemesh.biz
   wrote:
  
   On 04/06/13 15:28, Erez D wrote:
  
   thanks,
  
   so i guess if i use unidirectional connection, and the reader does
   not
   expect to get an EOF()
   thank i'm safe.
  
   Why are you so keen on doing it wrong?
  
   No, you are not safe. If the child process dies because of a
   segmentation
   fault (or whatever), the parent will notice this through the EOF
   received (I
   am assuming here, since you couldn't be bothered with closing a file
   descriptor, that you did not install a SIGCHLD handler to monitor
   for
   this
   possibility). This means that should one process die unexpectedly,
   the
   other
   will hang forever.
  
   it's not a matter of being bothered. closing a file has it's
   implications
  
   1.  close the file for one thread closes for all
  thread and fork are 2 very different things, best practice for fork
  ('full' children, I think everyone understands fork() when you say
  child) is to close, when using threads that is I believe not the case.
   2. what if i want later children using the same pipe, as in all
   childs
   write
   to same pipe read by parent...
  so the children are all closing the read end and the parent only
  closes write, where is the problem?
 
  if the parent closes the write side, then new forked children have
  their
  write side already closed.
 That's why we are able to check if we are a child or a parent with the
 fork() function.

 that doesn't help

 sunday: parent creates a pipe
 monday: parent forks for child 1. parent closes write. child 1 closes read.
 child 1 now can write and parent can read.
 tuesday: parent forks for child 2. child 2 can not write - pipe already
 close by parent on monday.
Seriously?
Just have the parent request another child from the child, or create a
child whose specific task is spawning the worker children, or any of
the many other solutions to this problem.
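
For illustration, a rough sketch of the second suggestion (hypothetical code -
the workers here just exec echo): a dedicated spawner process keeps the write
end and forks the workers, so the parent can close its own write end once and
still see EOF when all the work is done:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t spawner = fork();
    if (spawner == 0) {                  /* spawner: holds the write end */
        close(fds[0]);                   /* spawner never reads */
        for (int day = 0; day < 3; day++) {
            pid_t worker = fork();
            if (worker == 0) {
                dup2(fds[1], STDOUT_FILENO);
                close(fds[1]);
                execlp("echo", "echo", "work done", (char *)NULL);
                _exit(127);              /* exec failed */
            }
            waitpid(worker, NULL, 0);
        }
        close(fds[1]);                   /* last writer gone */
        _exit(0);
    }

    close(fds[1]);                       /* parent only reads */
    char buf[256];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fds[0]);                       /* EOF: spawner and workers are done */
    waitpid(spawner, NULL, 0);
    return 0;
}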


 
  
  
   Best practices are there for a reason, despite what others here
   might
   have
   you think.
  
   Shachar
  
  
  
   ___
   Linux-il mailing list
   Linux-il@cs.huji.ac.il
   http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
  
 
 



___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-06 Thread Erez D
On Thu, Jun 6, 2013 at 5:45 PM, E.S. Rosenberg esr+linux...@g.jct.ac.il wrote:

 2013/6/6 Erez D erez0...@gmail.com:
 
 
 
  On Thu, Jun 6, 2013 at 5:29 PM, E.S. Rosenberg esr+linux...@g.jct.ac.il
 
  wrote:
 
  re:all, forgot to change my from field.
 
  2013/6/6 Erez D erez0...@gmail.com:
  
  
  
   On Thu, Jun 6, 2013 at 5:04 PM, E.S. Rosenberg e...@g.jct.ac.il
 wrote:
  
   2013/6/6 Erez D erez0...@gmail.com:
   
   
   
On Tue, Jun 4, 2013 at 6:09 PM, Shachar Shemesh 
 shac...@shemesh.biz
wrote:
   
On 04/06/13 15:28, Erez D wrote:
   
thanks,
   
so i guess if i use unidirectional connection, and the reader does
not
expect to get an EOF()
thank i'm safe.
   
Why are you so keen on doing it wrong?
   
No, you are not safe. If the child process dies because of a
segmentation
fault (or whatever), the parent will notice this through the EOF
received (I
am assuming here, since you couldn't be bothered with closing a
 file
descriptor, that you did not install a SIGCHLD handler to monitor
for
this
possibility). This means that should one process die unexpectedly,
the
other
will hang forever.
   
it's not a matter of being bothered. closing a file has it's
implications
   
1.  close the file for one thread closes for all
   thread and fork are 2 very different things, best practice for fork
   ('full' children, I think everyone understands fork() when you say
   child) is to close, when using threads that is I believe not the
 case.
2. what if i want later children using the same pipe, as in all
childs
write
to same pipe read by parent...
   so the children are all closing the read end and the parent only
   closes write, where is the problem?
  
   if the parent closes the write side, then new forked children have
   their
   write side already closed.
  That's why we are able to check if we are a child or a parent with the
  fork() function.
 
  that doesn't help
 
  sunday: parent creates a pipe
  monday: parent forks for child 1. parent closes write. child 1 closes
 read.
  child 1 now can write and parent can read.
  tuesday: parent forks for child 2. child 2 can not write - pipe already
  close by parent on monday.
 Seriously?

Yes, seriously!

 Just have the parent request another child from the child,

Can't do - I do not have the source of the child (I do fork+exec).

 or create a
 child who's specific task is spawning the worker children,

Why complicate things? Such a child takes a lot more resources than an
unclosed side of a pipe.
Also, you have just moved the problem: instead of the parent having an
unclosed write side, you have this special child with an unclosed write side.

 or any of
 the other many solutions to this problem.

Anything simpler than not closing the 'write' side of the pipe in the
parent?

 
 
  
   
   
Best practices are there for a reason, despite what others here
might
have
you think.
   
Shachar
   
   
   
___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
   
  
  
 
 

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-06 Thread Shachar Shemesh
On 06/06/13 18:14, Erez D wrote:


 Just have the parent request another child from the child,

 can't do - i do not have the source of the child. ( i do fork+exec )
Remind me why you insist on one pipe for all children? What are you
going to do if the children's lifetimes overlap? What is wrong with the
standard practice of pipe, fork, parent closes write, child closes read
and execs, parent reads until EOF and then closes read? Repeat daily?

Shachar
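
For illustration, a minimal sketch of the routine described above (assuming
each child writes to the parent and is started with fork+exec; the program
name is just a placeholder):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

/* One child per "day": fresh pipe, fork, child closes the read end, dups
   the write end onto stdout and execs; parent closes the write end, reads
   until EOF, closes the read end and reaps the child. */
static int run_one_child(const char *program)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return -1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return -1; }

    if (pid == 0) {                      /* child */
        close(fds[0]);                   /* child never reads */
        dup2(fds[1], STDOUT_FILENO);
        close(fds[1]);
        execlp(program, program, (char *)NULL);
        _exit(127);                      /* exec failed */
    }

    close(fds[1]);                       /* parent never writes */
    char buf[256];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fds[0]);                       /* EOF seen: done with this child */
    return waitpid(pid, NULL, 0) == -1 ? -1 : 0;
}

int main(void)
{
    for (int day = 0; day < 3; day++)    /* "repeat daily" */
        if (run_one_child("date") != 0)
            return 1;
    return 0;
}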
___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


c/unix q

2013-06-04 Thread Erez D
Hello,

Using the usual pipe()+fork()+dup()+close() to fork a child process and
pipe data from and to it,

I know both the child and the parent must close the unused fds.

Why?
What happens if I don't close the unused fds?


Thanks,
erez.
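
For reference, a minimal sketch of that usual sequence (here the parent
writes and the exec'd child reads the data on its stdin; the commented
close() calls are the ones the question asks about):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];                        /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                    /* child */
        close(fds[1]);                 /* unused write end */
        dup2(fds[0], STDIN_FILENO);    /* read the pipe as stdin */
        close(fds[0]);                 /* original fd no longer needed */
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);                    /* exec failed */
    }

    close(fds[0]);                     /* parent: unused read end */
    const char *msg = "one\ntwo\nthree\n";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);                     /* lets the child see EOF and finish */
    waitpid(pid, NULL, 0);
    return 0;
}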
___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-04 Thread ronys
Nothing. You're just wasting resources (file descriptors) and making your
code a bit harder to understand and maintain.

Note that for pipe(), you can use both fds at both ends of the pipe, but
it's very easy to get into a race condition. Better to open a pair of pipes,
one for each direction (of course, you now need to worry about
deadlocks...).

Rony
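
For illustration, a rough sketch of the pair-of-pipes setup - one pipe per
direction, with each side closing the ends it does not use:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int to_child[2], from_child[2];    /* [0] = read end, [1] = write end */
    if (pipe(to_child) == -1 || pipe(from_child) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                    /* child: read a request, write a reply */
        close(to_child[1]);            /* child only reads this pipe */
        close(from_child[0]);          /* child only writes the other one */
        char req[64];
        ssize_t n = read(to_child[0], req, sizeof req);
        if (n > 0)
            write(from_child[1], req, (size_t)n);   /* echo it back */
        close(to_child[0]);
        close(from_child[1]);
        _exit(0);
    }

    close(to_child[0]);                /* parent only writes this pipe */
    close(from_child[1]);              /* parent only reads the other one */
    write(to_child[1], "ping\n", 5);
    close(to_child[1]);                /* child sees EOF after the request */

    char reply[64];
    ssize_t n = read(from_child[0], reply, sizeof reply);
    if (n > 0)
        fwrite(reply, 1, (size_t)n, stdout);
    close(from_child[0]);
    waitpid(pid, NULL, 0);
    return 0;
}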


On Tue, Jun 4, 2013 at 2:24 PM, Erez D erez0...@gmail.com wrote:

 hello

 using the usual pipe()+fork()+dup()+close() to fork a child process and
 pipe data from and to it,

 I  know both the child and parent must close the unused fds.

 why ?
 what if i don't close the unsed fds ?


 thanks,
 erez.


 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




-- 
Ubi dubium, ibi libertas (where there is doubt, there is freedom)
___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-04 Thread Geoffrey S. Mendelson

On 06/04/2013 02:43 PM, ronys wrote:

Nothing. You're just wasting resources (file descriptors) and making
your code a bit harder to understand and maintain.



It kind of says to anyone reading the code that you put the minimum effort
you could into creating it, and implies there are details that were not
addressed.


Of course I'm an old assembly language programmer, where everything is
declared, nothing is left to default, and anything allocated or opened
is explicitly freed or closed when you are done with it.


Geoff.


--
Geoffrey S. Mendelson,  N3OWJ/4X1GM/KBUH7245/KBUW5379
It's Spring here in Jerusalem!!!

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-04 Thread Amos Shapira
Bzzt. Wrong.

If the reading process leaves its unused copy of the write end open, the pipe
is still considered open for writing even after the other side closes its
write end, which prevents the reading process from ever receiving the EOF
mark (read(2) returning zero bytes).

And just to back up my claim above - see a more comprehensive response here:
http://stackoverflow.com/a/976087
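
For illustration, a minimal sketch of that EOF mechanism - if the marked
close() in the parent were omitted, the read() loop would block forever
instead of returning 0:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                   /* child: write one line and exit */
        close(fds[0]);
        write(fds[1], "hello\n", 6);
        _exit(0);                     /* exiting closes the child's write end */
    }

    close(fds[1]);   /* crucial: while the parent still holds a write end, the
                        pipe never counts as "closed for writing" and read()
                        below can never return 0 */
    char buf[64];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    /* n == 0 here: EOF, because every write end is now closed */
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}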


On 4 June 2013 21:43, ronys ro...@gmx.net wrote:

 Nothing. You're just wasting resources (file descriptors) and making your
 code a bit harder to understand and maintain.

 Note that for pipe(), you can use both fds at both ends of the pipe, but
 it's very easy to get into a race condition.Better to open a pair of pipes,
 one for each direction (of course, you now need to worry about
 deadlocks...).

 Rony


 On Tue, Jun 4, 2013 at 2:24 PM, Erez D erez0...@gmail.com wrote:

 hello

 using the usual pipe()+fork()+dup()+close() to fork a child process and
 pipe data from and to it,

 I  know both the child and parent must close the unused fds.

 why ?
 what if i don't close the unsed fds ?


 thanks,
 erez.


 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




 --
 Ubi dubium, ibi libertas (where there is doubt, there is freedom)

 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




-- 
View my profile on LinkedIn: http://www.linkedin.com/in/gliderflyer
___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-04 Thread Amos Shapira
On 4 June 2013 21:43, ronys ro...@gmx.net wrote:

 Nothing. You're just wasting resources (file descriptors) and making your
 code a bit harder to understand and maintain.

 Note that for pipe(), you can use both fds at both ends of the pipe, but
 it's very easy to get into a race condition.Better to open a pair of pipes,
 one for each direction (of course, you now need to worry about
 deadlocks...).


And about this one (race conditions): any two processes using pipes (which
have a limited buffer size) to talk to each other bidirectionally run the
risk of a deadlock if not coded carefully, since they can easily reach a
point where both of them block on write(2), which will only unblock when the
other side read(2)s and frees up space in the buffer (but the other side is
blocked on a write - that's why it's called a deadlock). Typical ways to
avoid that are to create threads to watch the fds or to use non-blocking
I/O.
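
For illustration, a rough sketch of the 'watch the fds' idea with poll(2) and
non-blocking writes (a hypothetical helper, assuming one pipe in each
direction): only read when data is available and only write when there is
room, so neither side sits blocked in write(2):

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* Send len bytes to out_fd while draining whatever arrives on in_fd,
   so this process never blocks in write(2) while the peer is also
   blocked writing to us.  Returns 0 on success, -1 on error. */
static int pump(int out_fd, int in_fd, const char *data, size_t len)
{
    /* non-blocking writes: a short write just means "try again later" */
    fcntl(out_fd, F_SETFL, fcntl(out_fd, F_GETFL) | O_NONBLOCK);

    size_t sent = 0;
    while (sent < len) {
        struct pollfd pfd[2] = {
            { .fd = in_fd,  .events = POLLIN  },
            { .fd = out_fd, .events = POLLOUT },
        };
        if (poll(pfd, 2, -1) == -1)
            return -1;

        if (pfd[0].revents & (POLLIN | POLLHUP)) {   /* drain incoming data */
            char buf[4096];
            ssize_t n = read(in_fd, buf, sizeof buf);
            if (n > 0)
                fwrite(buf, 1, (size_t)n, stdout);
        }
        if (pfd[1].revents & POLLOUT) {              /* room to send more */
            ssize_t n = write(out_fd, data + sent, len - sent);
            if (n > 0)
                sent += (size_t)n;
            else if (n == -1 && errno != EAGAIN)
                return -1;
        }
    }
    return 0;
}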



 Rony


 On Tue, Jun 4, 2013 at 2:24 PM, Erez D erez0...@gmail.com wrote:

 hello

 using the usual pipe()+fork()+dup()+close() to fork a child process and
 pipe data from and to it,

 I  know both the child and parent must close the unused fds.

 why ?
 what if i don't close the unsed fds ?


 thanks,
 erez.


 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




 --
 Ubi dubium, ibi libertas (where there is doubt, there is freedom)

 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




-- 
View my profile on LinkedIn: http://www.linkedin.com/in/gliderflyer
___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-04 Thread Erez D
Thanks,

So I guess if I use a unidirectional connection, and the reader does not
expect to get an EOF,
then I'm safe.

Thanks,
erez.


On Tue, Jun 4, 2013 at 3:23 PM, Amos Shapira amos.shap...@gmail.com wrote:

 On 4 June 2013 21:43, ronys ro...@gmx.net wrote:

 Nothing. You're just wasting resources (file descriptors) and making your
 code a bit harder to understand and maintain.

 Note that for pipe(), you can use both fds at both ends of the pipe, but
 it's very easy to get into a race condition.Better to open a pair of pipes,
 one for each direction (of course, you now need to worry about
 deadlocks...).


 And about this one (race conditions) - any two processes using pipes
 (which have limited buffer size) to talk to each other bi-directionally run
 the risk of a deadlock if not coded carefully since they can easily reach a
 point where both of them block on write(2) which will only unblock when the
 other side read(2)'s and frees up space in the buffer (but the other side
 is blocked on a write - that's why it's called a deadlock). Typical ways
 to avoid that are to create threads to watch the fd's or use none-blocking
 IO.



 Rony


 On Tue, Jun 4, 2013 at 2:24 PM, Erez D erez0...@gmail.com wrote:

 hello

 using the usual pipe()+fork()+dup()+close() to fork a child process and
 pipe data from and to it,

 I  know both the child and parent must close the unused fds.

 why ?
 what if i don't close the unsed fds ?


 thanks,
 erez.


 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




 --
 Ubi dubium, ibi libertas (where there is doubt, there is freedom)

 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il




 --
  View my profile on LinkedIn: http://www.linkedin.com/in/gliderflyer

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-04 Thread Yedidyah Bar David
Also, you might cause other software that inherits the fds to
fail/complain/whatever.
I only mention this because just yesterday I noticed that when running
'lvs' on my Debian wheezy laptop, I get:
File descriptor 3 (/usr/share/bash-completion/completions) leaked on
lvs invocation. Parent PID 11833: -su
File descriptor 4 (/usr/share/bash-completion/completions) leaked on
lvs invocation. Parent PID 11833: -su
File descriptor 5 (/usr/share/bash-completion/completions) leaked on
lvs invocation. Parent PID 11833: -su
and when I searched I found:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=432986
FYI
-- 
Didi
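
For illustration, a short sketch of one common cure for that kind of leak -
mark descriptors close-on-exec so programs started via exec never inherit
them:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Mark an fd close-on-exec so it is not inherited across execve(),
   which avoids "File descriptor N leaked on ... invocation" complaints
   from programs that check for stray descriptors. */
static int set_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }
    /* on Linux, pipe2(fds, O_CLOEXEC) achieves the same thing atomically */
    set_cloexec(fds[0]);
    set_cloexec(fds[1]);
    return 0;
}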

2013/6/4 Geoffrey S. Mendelson geoffreymendel...@gmail.com

 On 06/04/2013 02:43 PM, ronys wrote:

 Nothing. You're just wasting resources (file descriptors) and making
 your code a bit harder to understand and maintain.


 It kind of says to anyone reading the code that you put the minimum into 
 creating it you could, and implies there are details that were not addressed.

 Of course I'm an old assembly language programmer, where everything is 
 declared, nothing is left to default, and anything allocated  or opened is 
 explicitly freed or closed when you are done with it.

 Geoff.


 --
 Geoffrey S. Mendelson,  N3OWJ/4X1GM/KBUH7245/KBUW5379
 It's Spring here in Jerusalem!!!


 ___
 Linux-il mailing list
 Linux-il@cs.huji.ac.il
 http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il

___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il


Re: c/unix q

2013-06-04 Thread Shachar Shemesh
On 04/06/13 15:28, Erez D wrote:
 thanks,

 so i guess if i use unidirectional connection, and the reader does not
 expect to get an EOF()
 thank i'm safe.

Why are you so keen on doing it wrong?

No, you are not safe. If the child process dies because of a
segmentation fault (or whatever), the only way the parent can notice is
through the EOF it receives (I am assuming here, since you couldn't be
bothered with closing a file descriptor, that you did not install a SIGCHLD
handler to monitor for this possibility). This means that should one
process die unexpectedly, the other will hang forever.

Best practices are there for a reason, despite what others here might
have you think.

Shachar
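
For illustration, a minimal sketch of the SIGCHLD monitoring mentioned above -
reap children as they die instead of relying only on EOF:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t child_died;

static void on_sigchld(int sig)
{
    (void)sig;
    int saved_errno = errno;
    while (waitpid(-1, NULL, WNOHANG) > 0)   /* reap every exited child */
        child_died = 1;
    errno = saved_errno;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    if (sigaction(SIGCHLD, &sa, NULL) == -1) { perror("sigaction"); return 1; }

    /* block SIGCHLD around fork() so the notification cannot arrive
       before the parent is waiting for it */
    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGCHLD);
    sigprocmask(SIG_BLOCK, &block, &old);

    if (fork() == 0)
        _exit(0);                            /* a child that dies immediately */

    while (!child_died)
        sigsuspend(&old);                    /* atomically unblock and wait */

    printf("parent: child has been reaped\n");
    return 0;
}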
___
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il