[Valgrind-users] Can data from checkgrind output files be directed to a network socket instead

2010-06-08 Thread Satya V. Gupta
I am using checkgrind on a system that has very little disk space. I am
wondering if someone knows how I can direct the checkgrind output files
(basic block, Function before, Function after) to a network socket instead? 

 

Thanks

 

SVG



--
ThinkGeek and WIRED's GeekDad team up for the Ultimate 
GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the 
lucky parental unit.  See the prize list and enter to win: 
http://p.sf.net/sfu/thinkgeek-promo
___
Valgrind-users mailing list
Valgrind-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/valgrind-users


Re: [Valgrind-users] Can data from checkgrind output files be directed to a network socket instead

2010-06-08 Thread John Reiser
 how can I direct the checkgrind output files
 (basic block, Function before, Function after) to a network socket instead?

If checkgrind forces its output onto named files (instead of designated
but unnamed file descriptors), then create those filenames in advance
as named pipes using mkfifo, and for each named pipe use a parallel
process (such as cp or cat) to copy the named pipe to the socket.

File descriptors can be redirected using shell syntax.  As Fred Smith
suggested, the usual destination would be 'nc' (netcat).

nc is very versatile, but if for some reason it is not enough, then
write a small program to establish the socket, dup2(socket, 1),
and finally execve(checkgrind, ...), which will redirect stdout
to the socket.

From within any Linux process, file descriptor k may be referenced
as the filename /proc/self/fd/k.  For instance, stderr is
/proc/self/fd/2.

[All of this is no different for 'checkgrind' than for any process.]



Re: [Valgrind-users] Can data from checkgrind output files be directed to a network socket instead

2010-06-08 Thread Bart Van Assche
On Tue, Jun 8, 2010 at 11:25 AM, Satya V. Gupta guptasa...@netzero.net wrote:

  I am using checkgrind on a system that has very little disk space. I am
 wondering if someone knows how I can direct the checkgrind output files
 (basic block, Function before, Function after) to a network socket instead?

The most convenient way to make sure that data is saved somewhere else is to
use NFS. Just mount a directory via NFS from a server (e.g. on /mnt), change
the current directory to /mnt and start checkgrind (what is checkgrind?).

Bart.


Re: [Valgrind-users] [drd] Race condition in example/http/server2 from boost asio library?

2010-06-08 Thread Bart Van Assche
On Tue, Jun 8, 2010 at 4:25 AM, Jorge Moraleda jorge.moral...@gmail.com wrote:

 I compile example/http/server2 from boost (1.43) asio library using:
 g++ -l boost_thread -l boost_system
 /opt/boost/boost/doc/html/boost_asio/example/http/server2/*.cpp

 When I run it using valgrind drd (svn r11158) using:

 valgrind --tool=drd ./a.out 127.0.0.1 31175 4 /tmp

 The output is clean until the first time I connect to the server (by
 typing http://localhost:31175/anything on a browser) at which point
 several warnings pop up as shown below. Are these false alarms, or
 actual concurrency bugs in boost asio or the example code?

 Thank you,

 Jorge

 ==15768== drd, a thread error detector
 ==15768== Copyright (C) 2006-2010, and GNU GPL'd, by Bart Van Assche.
 ==15768== Using Valgrind-3.6.0.SVN and LibVEX; rerun with -h for copyright 
 info
 ==15768== Command: ./a.out 127.0.0.1 31175 8 /tmp
 ==15768==
 ==15768== Thread 4:
 ==15768== Conflicting load by thread 4 at 0x061875f0 size 8
 ==15768==    at 0x40DB72:
 boost::asio::detail::op_queue<boost::asio::detail::reactor_op>::front()
 (in /tmp/a.out)
 ==15768==    by 0x417974:
 boost::asio::detail::epoll_reactor::run(bool,
 boost::asio::detail::op_queue<boost::asio::detail::task_io_service_operation<boost::asio::detail::epoll_reactor>
 >&) (in /tmp/a.out)
 [ ... ]

This looks like a race on descriptor_state::op_queue_, so I had a look
at the following source code:
http://www.boost.org/doc/libs/1_43_0/boost/asio/detail/epoll_reactor.hpp.

My comments on this source code are as follows:
* The comments at the bottom of class epoll_reactor say that any
access of registered_descriptors_ should be protected by
registered_descriptors_mutex_. However, the method shutdown_service()
modifies the container registered_descriptors_ but doesn't lock
registered_descriptors_mutex_.
* The method epoll_reactor::register_descriptor() modifies its second
argument (descriptor_data) such that it points to the newly created
descriptor_state object. All data members of the struct
descriptor_state are public, but all accesses must be guarded by a
lock on descriptor_state::mutex_. So all callers of
register_descriptor() must be checked in order to verify whether or
not there are any thread-unsafe accesses of
descriptor_state::op_queue_ or descriptor_state::shutdown_. Personally,
I would never recommend such a class design.
* While all accesses of the members of struct descriptor_state should
be protected by locking descriptor_state::mutex_, no lock is held on
this last mutex by register_descriptor() when it sets
descriptor_data::shutdown_ nor by shutdown_service() while it modifies
descriptor_state::op_queue_ and descriptor_state::shutdown_. The
former is easy to fix: move the descriptor_data->shutdown_ = false
statement to somewhere before the epoll_ctl() system call.

Does one of the above scenarios explain the race report you have observed?

Bart.



Re: [Valgrind-users] [drd] Race condition in boost thread library?

2010-06-08 Thread Bart Van Assche
On Tue, Jun 8, 2010 at 4:06 AM, Jorge Moraleda jorge.moral...@gmail.com wrote:
 [ ... ]
 I recompiled boost from source using:
 bjam variant=debug define=BOOST_LOG_USE_CHAR install

 and all the above warnings are gone. When I run the program, about
 half of the time I get a clean output; the rest of the time I get the
 following:

 ==14837== drd, a thread error detector
 ==14837== Copyright (C) 2006-2010, and GNU GPL'd, by Bart Van Assche.
 ==14837== Using Valgrind-3.6.0.SVN and LibVEX; rerun with -h for copyright 
 info
 ==14837== Command: ./a.out
 ==14837==
 ==14837== Thread 3:
 ==14837== Conflicting load by thread 3 at 0x05d8a288 size 8
 ==14837==    at 0x5B78ECE: __nptl_deallocate_tsd (pthread_create.c:153)
 [ ... ]

I had a look at the implementation in glibc of the function
__nptl_deallocate_tsd(). The function itself looks fine. So this kind
of race report is probably caused by reuse of the memory for
thread-local storage from a terminated thread by a newly created
thread. I have added a suppression pattern for the above report.
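(The pattern added to the default suppressions lives in the Valgrind source; a user-side entry in a file passed via --suppressions= would look roughly like this, with the name line freely chosen:)

```
{
   nptl-tsd-reuse-after-thread-exit
   drd:ConflictingAccess
   fun:__nptl_deallocate_tsd
}
```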

Bart.



[Valgrind-users] Clarification about memcheck and custom allocators

2010-06-08 Thread Alex Slover
Hello-

I'm using the custom memory pool macros for the first time, and though
I've read through all of Memcheck's documentation, there are a few
questions I'm still not quite clear on. I'd also like to explain my
custom memory allocation scheme, and how I'm planning on tagging it,
so someone can help me if there are any glaring mistakes. In
particular:

1. What is the relationship between the valgrind MALLOC/FREELIKE_BLOCK
and Memcheck-specific MEMPOOL_ALLOC functions? If I'm carving out a
chunk of memory, do I need to call them both? Also, am I correct in
assuming that these functions take care of marking the memory as
defined/undefined/noaccess?

2. I understand that I should use MAKE_MEM_NOACCESS to mark sections
of memory that are used internally by the allocator, but won't this
cause Memcheck to find errors whenever the allocator changes its own
state? How does Memcheck know the difference between the allocator
code and client code? Should I use VALGRIND_DISCARD for this?

3. Am I correct in assuming that CREATE_MEMPOOL only registers a pool
and its base address with valgrind? It doesn't change the layout of
memory at all?

=

The custom allocator setup is as follows:

When the allocator is created, a large arena is malloc'd. The arena
has a brief descriptor. For each object type that you wish to store in
the arena, a pool is created. Pools may consist of one or more blocks
(of constant size), and each block can hold a certain number of
frames, one frame per object. For each pool, one block is the home
block which contains descriptors for both the pool and the block,
other blocks just have a block descriptor. Thus, when you wish to
allocate space for an object, the pool associated with that object
looks through its blocks for a free frame. If none are left, it
creates a new block for that pool.

My plan was thus:

- When the arena is created, mark the entire space as NOACCESS
- Each block (not pool) corresponds to a MEMPOOL, so whenever a block
is created, CREATE_MEMPOOL is called
- When an object is allocated, MEMPOOL_ALLOC is called, and
MEMPOOL_FREE is used when it's freed

Does this seem reasonable?

Thanks for your help,
Alex



Re: [Valgrind-users] Clarification about memcheck and custom allocators

2010-06-08 Thread Dave Goodell
On Jun 8, 2010, at 3:05 PM CDT, Alex Slover wrote:

 Hello-
 
 I'm using the custom memory pool macros for the first time, and though
 I've read through all of Memcheck's documentation, there are a few
 questions I'm still not quite clear on. I'd also like to explain my
 custom memory allocation scheme, and how I'm planning on tagging it,
 so someone can help me if there are any glaring mistakes. In
 particular:
 
 1. What is the relationship between the valgrind MALLOC/FREELIKE_BLOCK
 and Memcheck-specific MEMPOOL_ALLOC functions? If I'm carving out a
 chunk of memory, do I need to call them both? Also, am I correct in
 assuming that these functions take care of marking the memory as
 defined/undefined/noaccess?

IIRC, you should use one or the other.  Also, I remember that (in the past at 
least) you can't use MALLOC/FREELIKE_BLOCK on memory returned from the system 
malloc/free functions.  I never tried with mmap'ed memory, but I'm sure someone 
else on the list knows whether this works.  You could also find out pretty 
quickly with a small test program.

 2. I understand that I should use MAKE_MEM_NOACCESS to mark sections
 of memory that are used internally by the allocator, but won't this
 cause Memcheck to find errors whenever the allocator changes its own
 state? How does Memcheck know the difference between the allocator
 code and client code? Should I use VALGRIND_DISCARD for this?

Valgrind doesn't have any magic way to tell what accesses are made by the 
allocator and which are made by the client code.  You have to twiddle regions 
appropriately via the MAKE_MEM_{NOACCESS,UNDEFINED,DEFINED} requests to prevent 
valgrind from warning about valid accesses by the allocator.  This is what I 
did when I added valgrind client requests to the MPI handle allocator in MPICH2.

I think DISCARD is only for telling valgrind that it should stop keeping track 
of the description you gave it via CREATE_BLOCK.

 3. Am I correct in assuming that CREATE_MEMPOOL only registers a pool
 and its base address with valgrind? It doesn't change the layout of
 memory at all?

No layout of memory is changed, AFAIK.  You are just telling valgrind that you 
will use this address as a handle for grouping MEMPOOL_ALLOC/FREE requests.

 =
 
 The custom allocator setup is as follows:
 
 When the allocator is created, a large arena is malloc'd. The arena
 has a brief descriptor. For each object type that you wish to store in
 the arena, a pool is created. Pools may consist of one or more blocks
 (of constant size), and each block can hold a certain number of
 frames, one frame per object. For each pool, one block is the home
 block which contains descriptors for both the pool and the block,
 other blocks just have a block descriptor. Thus, when you wish to
 allocate space for an object, the pool associated with that object
 looks through its blocks for a free frame. If none are left, it
 creates a new block for that pool.
 
 My plan was thus:
 
 - When the arena is created, mark the entire space as NOACCESS
 - Each block (not pool) corresponds to a MEMPOOL, so whenever a block
 is created, CREATE_MEMPOOL is called
 - When an object is allocated, MEMPOOL_ALLOC is called, and
 MEMPOOL_FREE is used when it's freed
 
 Does this seem reasonable?

Seems reasonable enough.  My understanding is that the MEMPOOL is just there to 
group objects in a way that makes sense to you.  So if it makes more sense to 
group them by block instead of pool, then do that.  However, since you have 
some pool-common bookkeeping information in the home block of each pool, I 
think I would personally create a MEMPOOL for each pool.

-Dave

