Re: CVS 1.10.8 Windows Binary
Alexandre Parenteau schrieb: Kevin Greiner wrote: The only strange part is that the previous build I had, 1.10, was 897,563 bytes while this build is only 569,344. Maybe the difference between Debug and Release builds? The cvs.exe distributed with WinCvs is also a cvs-1.10.8 version plus some patches (like the proxy support), is likewise compiled with VisualC++ 6.0 SP 3, and is only 479,232 bytes. Strange, isn't it? Maybe you used different settings of the "optimizations" switch on the C/C++ tab? Regards Martin
Base tag for cvs tag -b [Was: Re: Feature wanted: checkin time tagging]
"Greg A. Woods" schrieb: [ On Monday, March 6, 2000 at 11:45:34 (-0600), Michael Gersten wrote: ] [...] Also, as long as people are discussing new improvements for CVS, how about this: any time you create a branch tag 'newtag', automatically create a tag 'base-newtag' at the base (it can be a nightmare if you forget). That's not a bad idea -- it's been discussed before but there's the issue of forcing naming policies on people, not to mention that some people really don't want extra tags cluttering an already cluttered repository when they don't really need them. I don't think anyone's proposed a working implementation yet either (though this one should be really trivial to do). I agree. This feature would be really useful, as forgetting to tag the base may cause much pain afterwards. And I also agree that no specific naming convention should be forced on people. They should be free to choose the base tag name they want. One thing I thought of a long time ago, but never got around to doing either, is to allow multiple tags to be specified at once (I got the idea from observing that you can do this with "cvs import"). Obviously there would have to be a new option letter to specify new tags, and care would have to be taken to get the meaning of '-b' right (i.e. to which tags it applies). This might solve the problem sufficiently for almost everyone. I don't know if it would be technically ok, but what do you think about the following suggestion: Give the -b flag an optional parameter for the base tag (the branch tag is not a parameter to -b in the current implementation, but a parameter to cvs tag, as far as I can see). If the optional base tag is given, behave like a combination of today's 'cvs tag [options-except-b] basetag' and 'cvs tag -b [options-except-b] branchtag'. If the base tag is not given, ask the user if she really wants to continue this potentially dangerous / painful command (as is done by cvs release). In case she does, just behave like cvs tag -b does nowadays.
Otherwise cancel the whole action. Of course you have to be careful about how to react if someone gives "cvs tag -b mytag". Should you take "mytag" as the branch tag (current behaviour!) and just warn as shown above, or should you take it as the parameter to -b (i.e. the base tag) and complain about the missing branch tag? I would prefer the first solution as it is closer to the current behaviour. However, I don't know if it's technically realizable. If the proposed syntax is impossible or unwanted, you could still use a new flag letter (let's say -B as a draft hypothesis) for the base tag. The confirmation question suggested above could be asked nevertheless. Regards Martin
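Martin's proposed behaviour can be prototyped today as a small wrapper around plain cvs tag. The name "cvstagb" and its argument order are made up for illustration; only the two ordinary "cvs tag" invocations are real CVS usage, and the script assumes it is run inside a checked-out working directory:

```shell
#!/bin/sh
# Hypothetical wrapper sketching the proposal: tag the branch point
# first, then create the branch.  "cvstagb" is a made-up name.
# Usage: cvstagb BRANCHTAG [BASETAG]
cvstagb() {
    branch=$1
    base=$2
    if [ -z "$base" ]; then
        # mimic the "cvs release"-style safety prompt from the proposal
        printf 'No base tag given; branch "%s" anyway? [y/N] ' "$branch"
        read answer
        [ "$answer" = y ] || { echo 'aborted'; return 1; }
    else
        cvs tag "$base" || return 1   # tag the base (branch point) first
    fi
    cvs tag -b "$branch"              # then create the branch tag
}
```

Running `cvstagb mybranch mybase` would tag the current revisions with mybase and then branch them as mybranch, which is exactly the edit-unedit-free two-step being asked for.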
Re: Unix to Dos filtering
Stephen L Arnold wrote: Wrong. I distinctly remember the samba guys (Allison Tridgell) saying they didn't want samba to attempt any such conversion. From the samba docs: Hmm... There is a FAQ (must dig it out if I can find it) that specifically mentions that smbfs honours the 'conv=' options like FAT does. I have had this problem myself, with files coming out differently when you read them through smbfs than when you read them on a windows machine. I generally FTP between unix and windows these days because the corruption problems are a nightmare (not to mention smbfs suddenly dropping the connection in the middle of a file copy and hanging the 'cp' process). Generally SMB mounts will only stay 'alive' for half an hour or so, then they will die. You have to unmount/remount to wake them up again. I know a lot of stability problems have been fixed recently, and maybe it's better now, but I still wouldn't recommend trying to access a repository through it. Tony -- Swamp frog: ribb-it ribb-it ribb-it Busch frog: bud..wis..er bud..wis..er Win95 frog: Re-boot Re-boot Re-boot [EMAIL PROTECTED]
Re: Combined Commit Emails
You should _*ALWAYS*_ at least look through commit scripts. For all you know, someone's put an "rm -rf /" in there somewhere ;). Noel [EMAIL PROTECTED] on 03/08/2000 04:34:59 AM To: [EMAIL PROTECTED] cc: (bcc: Noel L Yap) Subject: Combined Commit Emails A while ago I asked about combined commit emails and was told to use the stuff in the contrib directory. Well, I've set up my loginfo and commitinfo as follows: commitinfo: ALL /usr/src/cvs/contrib/commit_prep loginfo: DEFAULT /usr/src/cvs/contrib/log_accum -d -m [EMAIL PROTECTED] I've most probably got these entries wrong... anyway, when I commit something I get the following: /cvsroot/html/naked/data/1999.12.inc,v -- 1999.12.inc new revision: 1.2; previous revision: 1.1 done Debug turned on... module - dir- / path - files - id - 57 Searching for log file index... found log file at 0.57, now writing tmp files. Checking current dir against last dir. Cannot open file /tmp/#cvs.lastdir.57. /tmp contains: #cvs.files.changed.0.57 #cvs.files.log.0.57 but no lastdir. Do I need to edit commit_prep at all? Mark -- Mark Derricutt - Software Developer - http://www.chalice.gen.nz Getting jiggy with no audio cd present
Re: Dynamic website collaboration
On Wed, 8 Mar 2000, Herouth Maoz wrote: The normal method of working with CVS is to have a private working directory, where we change, test, and when satisfied, commit to the (internal) server where others can QA it. This is a problem: The site is a dynamic one, meaning it's definitely not simple HTML. Testing even a simple color change needs an actual web server, database connection, PHP installation, and whatnot. Having such a setup on each of the developer's computers is redundant - a maintenance nightmare. Here's what I've done: First, I came to the conclusion that the designers (as separate from "engineers") don't need to have direct, immediate access to a dynamic server as long as they can preview the HTML locally. Second, I decided that engineers did need direct access. So, the engineers each get their own separate server on the dev machine. They're traditional working areas, with no auto-update and whatnot. The port number for them is determined automatically by running: port=`echo "$LOGNAME $root" |sum |awk '{print $1}'` port=`echo "$port / 6 + 1024" |bc` on Linux, or port=`echo "$LOGNAME $root" |sum |awk '{print $1}'` port=`echo "$port + 1024" |bc` on Solaris. There's also a single shared dev server that is yet another working area, but I have a cron job that runs cvs update on it every 5 minutes: 0,5,10,15,20,25,30,35,40,45,50,55 * * * * (for dir in elements3; do cd /export/home/u/willer/$dir; nice /usr/local/bin/cvs update -dP 2>&1 |grep -v '^[?MUP]' |grep -v 'Updating'; done) (all on one line). The designers do their stuff on their own hard drive, and use WinCVS (www.wincvs.org) to commit changes. They preview the files as local files, which works well enough for most purposes. Also, the dynamic files (in my case HTML::Embperl files) have a .html extension. This facilitates local-file preview.
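The per-developer port computation above can be made platform-independent. This sketch swaps sum/bc for cksum (POSIX) and shell arithmetic, so one formula behaves the same on Linux and Solaris; the 1000-port window starting at 1024 is an arbitrary choice for illustration, not part of the original scheme:

```shell
#!/bin/sh
# Derive a stable per-developer port from login name and repo root.
# cksum's CRC is deterministic across platforms, unlike sum's output,
# and $(( )) avoids depending on bc being installed.
dev_port() {
    crc=$(printf '%s %s\n' "$1" "$2" | cksum | awk '{print $1}')
    echo $(( crc % 1000 + 1024 ))   # always in 1024..2023
}
```

Each engineer's httpd would then listen on `dev_port "$LOGNAME" "$root"`, replacing the two OS-specific snippets.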
Re: Binary files and subsequent updates
Aditya Sanghi writes: CVS does not attempt to merge changes into a file defined as binary. Right? [...] Dick - updates - binaryBlob 1.2 QUESTION has Dick lost all his changes? What has happened to Dick's binaryBlob 1.1?? How did CVS update the binaryBlob file in Dick's area? What happens is: cvs update: nonmergeable file needs merge cvs update: revision 1.2 from repository is now in BinaryBlob cvs update: file from working directory is now in .#BinaryBlob.1.1 That is, cvs renames Dick's version, gets the current version from the repository, and tells Dick he needs to decide what (if anything) he wants to do to merge the changes. -Larry Jones Fortunately, that was our plan from the start. -- Calvin
cvs up -C bug(s)
"cvs up -C file" doesn't work correctly if "file" has been modified both in the repo and the working directory (ie a merge is "needed"). IMO, you should wind up with a clean repo copy (ie no merge). The default repo copy should be the HEAD (for consistency -- does anyone know of an easy way to specify the base rev?). There's also a problem with sticky tags when specifying "-r" with it (specifying "-A" on the same command line doesn't seem to help). Noel
CVS, wincvs, and watch/edit permissions and ownership
Ok. I have observed the following behaviour using CVS 1.10.8. First, please understand that our project is a website, so we want to keep a current version always checked out on the test server, and we need changes to take effect on the web server with as few steps as possible after the save, so that our HTML guys can see their changes. The goal is to do an RCS-type thing (everyone using the same files in the working directory and just checking in their changes), but use CVS so that developers on windows systems can check their stuff in directly from the winblows system. The plan is/was to use samba to allow the windows users to mount this "working" checkout directory, and then use wincvs to change permissions on the files so they can modify them. I have cvs watch turned on for all files. I did a cvs checkout of everything into a directory (the working directory). All files were checked out in my name, read only. My colleague went onto the CVS repository system and ran a cvs edit on the files I had checked out. He got a permission error. He then ran cvs unedit, at which point the files he specified changed to his ownership (cvs edit; cvs unedit changes ownership). Note that if I try to run cvs edit after he has run cvs edit but before he runs cvs unedit, I get an error that says I can't access a file in CVS/Base. This behavior is VERY similar to what we want. Is there any way to get the ownership of the file to change on the original cvs edit instead of having to run edit, unedit, edit (in order to get it owned by you and rw)? If someone does this from a client and uses the repository (via samba), will the files still change permissions? Would there be any way to slightly change the source code to make this work? Thanks, Ben
CVS and Cold Fusion Studio
Does anyone have templates or something to easily do commits (with a few key strokes) from cold fusion studio?
Re: removing the need for cvs add file to contact the server....
"Greg A. Woods" wrote: Make sure that you don't permit 'cvs co mispelling' to create a new directory when 'cvs co misspelling' was intended. If you spell it right in the "CVSROOT/modules" entry then all should be well... :-) How about 'cvs new module-name module-dir local-dir' to create a new entry, and update the modules file all in one? ('local-dir' might be '.'). There's no need for yet another new command. Defining the module (i.e. giving it a name and directory) in the CVSROOT/modules file should be sufficient. Ok, I had come in late and hadn't realized that you were first going to have to update the modules file before this would work. Even still, what's wrong with an 'all-in-one' to add something to the modules file as well? (Or are you of the 'if it can be in a script, it mustn't be in the regular program' camp?) Michael
Re: Dynamic website collaboration
[EMAIL PROTECTED] on 03/08/2000 10:16:41 AM CVS newbie here. Be warned... OK. We are using CVS here for web development. The normal method of working with CVS is to have a private working directory, where we change, test, and when satisfied, commit to the (internal) server where others can QA it. This is a problem: The site is a dynamic one, meaning it's definitely not simple HTML. Testing even a simple color change needs an actual web server, database connection, PHP installation, and whatnot. Having such a setup on each of the developer's computers is redundant - a maintenance nightmare. This is a necessary redundancy. Although the software (eg app server) may not need to be installed several times, it does have to be configured such that separate environments are possible. IMHO, this is even more important in web dev since so many people get to see dev mistakes (i.e. mistakes are even worse for your image on the web). My idea was that the working directory should be handled by one correctly-configured web server. This means it's in a central place. Each of us would edit the file of his choice, return it to the central testing area, use a browser to view it from there, and once satisfied, pass it on to the middle-tier test server for QA. Or something along these lines. I think you'll wind up stepping on each other's toes. It's extremely difficult (if not impossible) to test something when it's not in isolation. I would suggest configuring your servers so that they support multiple development environments. For example, each developer can have their own http port or application name. Problem here is, of course, team collaboration. We need some sort of concurrency resolution. But can we use CVS to do that, as well? I see several problems here: No, this is a very complicated communication problem. If something needs to be built, you'll have to tell everyone to stop working until the build is done. 1. 
CVS doesn't support a "draw one file, edit it, return it" approach very well. It doesn't have a listing function, as far as I know. Do we have to draw each file's name and location from memory? There have been several solutions to this, one of which is "cvs rdiff -s -r 0 -r HEAD module-name" or something like that. 2. How do I do the "major commit" thing? I was thinking of having our main (QA) level website on the trunk, and the testing area on a branch, and once satisfied, do a branch merge. But a branch merge is done completely from the point of branching every time, right? And to avoid that you have to branch again, or use changing tags. I have to keep in mind that some of our people use Windows and any complex command will have to be wrapped for them. And that means the tag names have to be hidden, and I think I'm getting in over my head here. There is no way to just say "merge the most recent versions of the testing branch to the trunk from the point of the last merge I made" or something like that, automatically, is there? Take a look at http://www.enteract.com/~bradapp/acme/branching/ for numerous ways of using branches to achieve different results. 3. Another solution is to have the exported area (which is used by the web server) become the working area for a second CVS repository (or a separate module in the same repository). Is there a way to make sure that every time I commit a change and it gets exported to the "working area", and it is a new file, it will be "added" to the other module/CVS, and when I remove something from the first module it will be marked for removal on the second module? Is doing a "cvs add" from the loginfo file going to deadlock things if the add is done to a module in the same CVS repository? On a different repository? This is outside the scope of CVS although I don't see why it can't be done. If I understand correctly, this solution is logically the same as number 2. Does my model have any obvious drawbacks? 
Is there a simpler CVS-based solution to my basic problem (not wanting to do local testing)? I can't think of any, but, then again, I really don't agree with your primary goals to begin with. Local testing is the way to go until it's time for integration testing (which will really still be a kind of local testing). Noel
Re: cvs up -C bug(s)
"Noel L Yap" [EMAIL PROTECTED] writes: "cvs up -C file" doesn't work correctly if "file" has been modified both in the repo and the working directory (ie a merge is "needed"). IMO, you should wind up with a clean repo copy (ie no merge). The default repo copy should be the HEAD (for consistency -- does anyone know of an easy way to specify the base rev?). There's also a problem with sticky tags when specifying "-r" with it (specifying "-A" on the same command line doesn't seem to help). Your understanding of its intended behavior is correct -- in fact, the whole point of this option is to discard local changes. But I'm not able to reproduce the problem -- when I modify a formerly unmodified file, and then do cvs update -C file I get the clean repository copy back, exactly as it's supposed to work. This happens both locally and in client/server mode. It is also what the sanity.sh test case for update -C tests. Are you running client/server, and if so does your server, as well as the client, know about the update -C option? If yes, then can you send a detailed reproduction recipe to [EMAIL PROTECTED]? Thanks, -Karl
RE: cvs up -C bug(s)
Noel yap wrote: "cvs up -C file" doesn't work correctly if "file" has been modified both in the repo and the working directory (ie a merge is "needed"). IMO, you should wind up with a clean repo copy (ie no merge). The default repo copy should be the HEAD [smc] or the tip of the branch if the revision in your sandbox has a sticky branch tag, of course. [...]
Re: removing the need for cvs add file to contact the server....
[ On Tuesday, March 7, 2000 at 21:23:34 (-0600), Michael Gersten wrote: ] Subject: Re: removing the need for "cvs add file" to contact the server Even still, what's wrong with an 'all-in-one' to add something to the module file as well? (or are you of the 'if it can be in a script, it mustn't be in the regular program' camp?) The problem with an 'all-in-one' command to start a new module is that you're then mixing two very different kinds of actions, and in doing so it is impossible to enforce a local policy that might say that only the repository administrator(s) can create new modules. Besides, the act of creating a module isn't something to be done lightly -- having to manually commit the entry to the modules file is a way of trying to get the committer to think a little about their actions. I suppose one might argue that the checkin of the CVSROOT/modules file should maybe be what triggers the initial creation of the first "module root" directory in the repository. There's no really efficient way to do that right now though -- you'd essentially have to call mkdir() for every directory name on the right-hand side and ignore the errors for already existing directories. There's also a chance that the repository administrator won't actually be permitted to commit to the module he or she creates, and in that case you'd need to have an authorised committer create the directory anyway (or a system administrator would have to change the permissions and/or ownership on the resulting directory). In any case, I've never seen anyone use CVS in such a way that having to do a separate commit to the modules file is even a measurable burden. People who use the modules file today have to do two steps; they have to both edit the modules file *and* create the initial directory, either by hand or with an empty "cvs import" or whatever. I'd be more than happy to eliminate just the second step, and of course this also enhances the usability of CVS in client/server mode (i.e. 
you don't have to login to the repository machine to manually create the directory, or play tricks with "cvs import", or whatever). -- Greg A. Woods +1 416 218-0098 VE3TCP [EMAIL PROTECTED] robohack!woods Planix, Inc. [EMAIL PROTECTED]; Secrets of the Weird [EMAIL PROTECTED]
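The two manual steps Greg describes can be bundled in a local helper. "create_module" below is a hypothetical convenience script, not a CVS command; it assumes $CVSROOT names a local repository directory you are allowed to write to, and that policy permits you to commit to CVSROOT/modules:

```shell
#!/bin/sh
# Sketch of module creation as done by hand today:
# step 1: make the module's root directory in the repository;
# step 2: register it in CVSROOT/modules via a normal checkout/commit.
create_module() {
    name=$1
    mkdir "$CVSROOT/$name" || return 1      # step 1: module root dir
    cvs checkout CVSROOT/modules            # step 2: edit the modules file
    ( cd CVSROOT &&
      printf '%s  %s\n' "$name" "$name" >> modules &&
      cvs commit -m "add module $name" modules )
    cvs release -d CVSROOT
}
```

Note this is exactly the kind of all-in-one command Greg argues against, since it bypasses any "only admins create modules" policy; it is shown only to make the two steps concrete.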
Re: Feature wanted: checkin time tagging
On Tuesday, March 7, Michael Gersten wrote: We have a very different philosophy here. Yes we do, but in a strange way, I'm actually enjoying this debate. I'll apologize right now to any/all of you on this list behind slow/expensive connections... The ultimate idea is this: When you check out something, you get a local copy; you then track changes to it; when you are satisfied, you commit it back to where you got it. The current CVS update/commit, without built-in support for flying fish, does not support that middle step: track changes to it. Ahh, I think you need to look into tags. One place I worked, not necessarily the best way of doing it, had a number of tags (not branches) which were used to help in this regard. At the head of any branch where development was going on, there was a tag which was used to build from. In other words, each night, a new head version was checked out, built, run through tests, etc. If that version passed, the original tag (which built the time before) was moved forward to include the newly working sources. In this way, developers were "free" to commit on the head of the branch, doing their development, but a "working" version was always retrievable by updating on the tag. Branches were used to do 3 main things. 1) On product release, a branch was made; in this way, any/all patches to the product were done on that branch. Note, it is hard to hold off on doing this branch until the last possible moment, especially if there are other people working on the next version before the current one is out. The less stuff there is on the branch you are shipping from, the easier it will be. Less code will have to be merged back to the trunk. To keep track of merges, people used tags to identify what has/has-not been merged. 2) Major rewrites, in parallel with current development. This tends to be something like a switch from C to C++, or other major things like that. 
It usually entails a major effort on the part of the developers working on the branch to keep up with merges from the trunk. The final merge back onto the trunk usually becomes more of a replace than a merge. 3) Exploration. This one I've seen happen, and be abused as well. Developers trying to explore an option. Something close to #2, but potentially never merged back into the trunk. I tend to discourage this reason for branches. I like the windows find tool that combines find, xargs, and grep in one box. Yes, it's limited, but it often works faster than trying to get the correct command typed in. And, it's more of what I think of. I'm sorry to hear that. What you are telling me is that you like to be limited by convenience... No. When I need to, I write the find ... | xargs. The windows find tool only supports looking for one type of file, or all files; it cannot say "any non-directory"; it cannot say "modified in the last 10 minutes" (but many versions of find cannot either; the Apple one on WebObjects cannot). The find construct takes longer to write and double-check. Yes, it's more powerful. But it's slower to use. But this is where scripting comes into play. Or simple shell aliases, if your shell supports them. The underlying tools are general enough to allow you to make as powerful and simple an interface to them as you wish. Ok, allow me to "edit" or rearrange your e-mail a little. First you say: I like convenience. I do not like to be limited by convenience. Then later on we get to: I'm sorry if CVS does not cater to your "spoon-fed" mentality. Now that's personal. I do not have a spoon-fed mentality. For this, all I can say is that "spoon-fed", and liking convenience over utility and power, are the same thing. And yes, it was personal. I'm glad you did not take too much offense at it. Think X windows. Now really think X windows, and novice computer users. Now look at how big your X-windows modifications file is, and how long it took you to build it. 
I lost mine when I left UCLA, the one I spent years building up/improving; when it was lost, that was when I realized just how bad the X environment was. I have to think of this all the time. I'm a sysadmin for the CS department here at the university. Allow me to tell you a story, a little related to X, and making things "easier". 7-8 years ago, when I went through university, not here, things were not "spoon-fed" to you. You had to think, set up your own things, use the tools given to make an environment in which you could be both productive and efficient. In some sense, it separated the men from the boys, and even furthered communication and community among the CS students. In today's university environment, we are told to "standardize", to make sure that students have a GUI to make things easier on the students. To let them focus on writing their quick-sort algorithms, etc. I keep having to remind people that the general access unix pool does not cater to students in this
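The nightly "moving tag" scheme described earlier in this message can be sketched as a cron-driven script. The module name, tag name LAST_GOOD, and make targets are placeholders; cvs rtag -F is the real option that re-points an existing tag:

```shell
#!/bin/sh
# If tonight's head checkout builds and passes tests, advance the
# LAST_GOOD tag to it; otherwise the tag stays on the last good build.
nightly_bless() {
    module=$1
    cvs checkout "$module" &&
    ( cd "$module" && make && make test ) &&
    cvs rtag -F LAST_GOOD "$module"   # -F moves the existing tag forward
}
```

Developers then retrieve the blessed tree with `cvs checkout -r LAST_GOOD module` (with -r, the tag is sticky, so later updates stay on it until they move back to the head).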
Re: Enhancement suggestion: CVS rename
On Wednesday, March 8, "Cameron, Steve" wrote: Bergur Ragnarsson writes: I'm using CVS 1.10 and I am quite happy with it; however there is one very important feature missing: CVS rename Very true. 'cvs rename' is missing. There are a number of ways to implement that feature; however, I don't see much point in actually doing it unless it is done well. A poorly implemented 'cvs rename' is worse than what is there now. At least I believe so, entirely IMHO of course. To implement rename, a name-location field needs to be added to the standard revision information (author, date, state, log). This field can be added in several ways: * use an existing field (i.e. author). * use a special symbol (dirty!?) * add a new field in such a way that the current rcs parser would ignore it, i.e. not complain about it - maybe not possible!? Making the rcs file format backwards incompatible should be avoided if at all possible. A very good, and I'd say necessary, goal, especially with WinCVS and other things like that out there... Modifications: 1. Add a name-location field that is associated with every revision (like author). The location field would be a path name starting from CVS_ROOT - it would not include the CVS_ROOT path prefix = easy to relocate the repository. * the location in the latest revision would be in sync with the cvs repository location * the name in the latest revision would also be the name of the file in the cvs repository 2. cvs checkout: find revision, check if a name-location field exists; if so, check the file out to that location and under that name 3. cvs rename: insert a pending name and location change 4. cvs commit (this should be integrated in rcs): move the updated file to the new name-location (often no change) How do you detect conflicts with this method? IE: Two different RCS files in the repo specify the same name-location. The design has two important attributes: * The cvs repository will look the same for the main branch for this new cvs and the current one. 
This means that when a user looks at the repository, he sees the most recent layout - which is natural. Yes, but how do you implement "cvs status"? The mapping is only forward, from RCS repo file to sandbox file. In order to have a reverse mapping for this (unless I misunderstand), you would need to keep extra information about each file in the sandbox, in CVS/Entries or a similar file... * All information is still stored in one file only. This would mean extensive searching to implement "cvs status", as the current filename may not be what is there in the repo... --Toby.
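For concreteness, here is roughly what the proposed per-revision field might look like inside an RCS delta. The field name "namelocation" and its placement are pure invention, shown only to make the discussion tangible; if I remember the rcsfile(5) grammar correctly, RCS 5.x reserves room for unrecognized "newphrase" key-value pairs that conforming parsers are supposed to skip, which would make the third option above less impossible than it sounds:

```
1.3
date    2000.03.08.12.00.00;    author bergur;  state Exp;
branches;
next    1.2;
namelocation    src/newname.c;
```

Here namelocation records the revision's path relative to CVSROOT, so checking out 1.3 would place the file at src/newname.c while 1.2 might still name the old location.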
RE: Unix to Dos filtering
-Original Message- From: Tony Hoyle [mailto:[EMAIL PROTECTED]] Sent: Wednesday, March 08, 2000 4:05 AM To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: Re: Unix to Dos filtering Stephen L Arnold wrote: Wrong. I distinctly remember the samba guys (Allison Tridgell) saying they didn't want samba to attempt any such conversion. From the samba docs: Hmm... There is a FAQ (must dig it out if I can find it) that specifically mentions that smbfs honours the 'conv=' options line FAT does. [snip] smbfs and samba are two different (but related) things (and are typically merged in most people's minds). smbfs is the smb filesystem that linux/unix/whatever needs in order to mount smb shares exported from other hosts. Samba is a pair of daemons (smbd and nmbd) that provide a non-windoze host with the ability to export smb shares to other hosts on the network. Samba also does browse-master, domain logins, some domain control stuff, smb print services, etc. I believe you are right about smbfs honoring the 'conv=' stuff, however, you didn't say "smbfs" you said "samba" (so I had to chime in ;) Steve with Std.Disclaimer; use Std.Disclaimer;
Re: Segfault for CVS over SSH
Quoting Larry Jones [EMAIL PROTECTED]: A traceback from the debugger will be much more useful than straces. Here is the backtrace from the child process on the server side (when connecting via ssh or kerberized rsh with an :ext: CVSROOT). If only one file is committed or has changed, everything works great. cvs server only dies when multiple files are committed. The problem shows up on line 320 of hash.c from cvs 1.10.8. I am sure that the root cause occurs somewhere else, but I am not familiar enough with the code to spot it immediately. Hopefully, it is obvious to someone. If someone has any pointers on how to proceed, please let me know. Thanks! benjy #0 strcmp (p1=0x0, p2=0x8188e88 "/tmp/cvs-serv5301/foo.test") at ../sysdeps/generic/strcmp.c:38 38 in ../sysdeps/generic/strcmp.c #1 0x80619e1 in findnode (list=0x8188b00, key=0x8188e88 "/tmp/cvs-serv5301/foo.test") at hash.c:320 320 if (strcmp (p->key, key) == 0) (gdb) print *p $40 = {type = HEADER, next = 0x0, prev = 0x0, hashnext = 0x8188e00, hashprev = 0x8188e00, key = 0x0, data = 0x0, delproc = 0} (gdb) print *head $42 = {type = UPDATE, next = 0x8184718, prev = 0x81848c8, hashnext = 0x8188db8, hashprev = 0x8188db8, key = 0x8184b58 "foo.test", data = 0x8188e28 "\004", delproc = 0x805a764 update_delproc} #2 0x806117b in lookup_file_by_inode (filepath=0x8188e88 "/tmp/cvs-serv5301/foo.test") at hardlink.c:99 99 p = findnode ((List *) hp->data, filepath); #3 0x8058838 in check_fileproc (callerdat=0x0, finfo=0xba38) at commit.c:1048 1048 linkp = lookup_file_by_inode (fullpath); #4 0x807f334 in do_file_proc (p=0x8180328, closure=0xba2c) at recurse.c:821 821 ret = frfile->frame->fileproc (frfile->frame->callerdat, finfo); #5 0x8061aaa in walklist (list=0x8180098, proc=0x807f1f0 do_file_proc, closure=0xba2c) at hash.c:370 370 err += proc (p, closure); #6 0x807f10e in do_recursion (frame=0xbae0) at recurse.c:725 725 err += walklist (filelist, do_file_proc, frfile); #7 0x807fbea in unroll_files_proc (p=0x8124c48, 
closure=0xbae0) at recurse.c:1194 1194 err += do_recursion (frame); #8 0x8061aaa in walklist (list=0x817fe30, proc=0x807fa60 unroll_files_proc, closure=0xbae0) at hash.c:370 370 err += proc (p, closure); #9 0x807eccc in start_recursion (fileproc=0x80580a4 check_fileproc, filesdoneproc=0x8058a74 check_filesdoneproc, direntproc=0x80588c8 check_direntproc, dirleaveproc=0, callerdat=0x0, argc=2, argv=0x8124ab8, local=0, which=1, aflag=0, readlock=0, update_preload=0x0, dosrcs=1) at recurse.c:343 343 err += walklist (files_by_dir, unroll_files_proc, (void *) frame); #10 0x8057d3f in commit (argc=2, argv=0x8124fdc) at commit.c:650 650 err = start_recursion (check_fileproc, check_filesdoneproc, #11 0x80863b8 in do_cvs_command (cmd_name=0x810cb6d "commit", command=0x80574b8 commit) at server.c:2753 2753 exitstatus = (*command) (argument_count, argument_vector); #12 0x8087242 in serve_ci (arg=0x8124742 "") at server.c:3391 3391 do_cvs_command ("commit", commit); #13 0x80890ab in server (argc=1, argv=0xbe18) at server.c:5042 5042 (*rq->func) (cmd); #14 0x806d55b in main (argc=1, argv=0xbe18) at main.c:1008 1008 err = (*(cm->func)) (argc, argv); === If I commit multiple files, or commit the directory when multiple files have been changed, I get a segfault. [Note: this was from a different run than the above backtrace.] Breakpoint 1, findnode (list=0x8189540, key=0x81898c8 "/tmp/cvs-serv5437/foo.test") at hash.c:320 320 if (strcmp (p->key, key) == 0) (gdb) step strcmp (p1=0x0, p2=0x81898c8 "/tmp/cvs-serv5437/foo.test") at ../sysdeps/generic/strcmp.c:32 32 ../sysdeps/generic/strcmp.c: No such file or directory. (gdb) step 33 in ../sysdeps/generic/strcmp.c (gdb) step 38 in ../sysdeps/generic/strcmp.c (gdb) step Program received signal SIGSEGV, Segmentation fault. 
strcmp (p1=0x0, p2=0x81898c8 "/tmp/cvs-serv5437/foo.test") at ../sysdeps/generic/strcmp.c:38 38 in ../sysdeps/generic/strcmp.c (gdb) info f Stack level 0, frame at 0xb8ec: eip = 0x400a0bb0 in strcmp (../sysdeps/generic/strcmp.c:38); saved eip 0x80619e1 called by frame at 0xb908 source language c. Arglist at 0xb8ec, args: p1=0x0, p2=0x81898c8 "/tmp/cvs-serv5437/foo.test" Locals at 0xb8ec, Previous frame's sp is 0x0 Saved registers: ebp at 0xb8ec, esi at 0xb8e8, eip at 0xb8f0 = For comparison, here is what it looks like when the commit succeeds. This case works fine: cvs -z3 commit -m "benjy: testing" foo.test If only one file has been modified, this works
RE: Binary files and subsequent updates
-Original Message- From: Aditya Sanghi [mailto:[EMAIL PROTECTED]] Sent: Wednesday, March 08, 2000 8:54 PM To: [EMAIL PROTECTED] Subject: Binary files and subsequent updates CVS does not attempt to merge changes into a file defined as binary. Right? So... Tom - checks out - binaryBlob 1.1 Dick - checks out - binaryBlob 1.1 Tom changes binaryBlob 1.1 in his own area. Dick changes binaryBlob 1.1 in his own area. Tom commits binaryBlob 1.1 => repository has binaryBlob 1.2 Dick commits binaryBlob 1.1 => NO => repository asks Dick to update his binaryBlob 1.1 with 1.2 Dick - updates - binaryBlob 1.2 QUESTION has Dick lost all his changes? What has happened to Dick's binaryBlob 1.1?? How did CVS update the binaryBlob file in Dick's area? Dick will never lose the changes. He will be prompted with flags like M or C or U to denote the changes made by him, by others, or a conflict situation in which the same lines are changed by somebody else. Then Dick can decide upon the changes. Dick's original file will be preserved as .#filename Rgds Sanjay Harwani What is the solution for this? How do developers working on forms (e.g. Visual Basic forms, Delphi forms) handle this situation?? Thanks in advance, aditya
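The usual prerequisite for the behaviour discussed above is marking such files binary when they are added. The helper name and file name below are placeholders, but -kb is the real CVS option that disables keyword expansion and line-ending conversion so the file is treated as nonmergeable:

```shell
#!/bin/sh
# Add a file as binary so CVS never attempts a text merge on it.
add_binary() {
    cvs add -kb "$1" &&                     # -kb marks the file binary
    cvs commit -m "add binary file" "$1"
    # On a later conflicting "cvs update", CVS keeps your copy as
    # .#file.rev, puts the repository revision in place, and you pick
    # one version by hand and commit it -- there is no automatic merge.
}
```

For Visual Basic or Delphi forms, this means conflicts must be avoided organizationally (e.g. with cvs watch/edit so only one person works on a form at a time), since the two binary versions cannot be combined by CVS.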
Re: cvs up -C bug(s)
Noel L Yap writes: I'm running client/server cvs-1.10.8 (with some "cvs edit" patches). The command ("cvs up -C file") works fine if no one has yet checked in "file" (after the local copy had been checked out). However, it doesn't work in the following situation:

   user1              user2
1. cvs co module      cvs co module
2. cd module          cd module
3. # modify file      # modify file
4.                    cvs ci file
5. cvs up -C file

Upon step 5, user1 gets: (Locally modified loginfo moved to .#file.1.1) cvs [server aborted]: cannot open loginfo for copying: No such file or directory If you use a local repository rather than client/server you get: retrieving revision 1.1 retrieving revision 1.2 Merging differences between 1.1 and 1.2 into file rcsmerge: warning: conflicts during merge cvs update: conflicts found in file C file You get similar results if the file has a sticky tag or date and you update to a different revision. In my opinion, you should never get merging. If you specify a particular revision or date, that's the version you should get and it should be sticky. Otherwise, if there's already a sticky tag or date on the file and you don't specify -A, you should get the sticky revision. Otherwise, you should get the head version (with no sticky revision). I don't have a clue as to what -j should do in combination with -C. -Larry Jones From now on, I'm devoting myself to the cultivation of interpersonal relationships. -- Calvin
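The five steps above can be collected into a reproduction script of the kind Karl asked for earlier in the thread. $REPO and the module/file names are placeholders, and the second checkout lands in a separate directory (-d module2) so one machine can play both users:

```shell
#!/bin/sh
# Reproduce: both users modify "file", user2 commits, then user1 runs
# "cvs up -C file" -- which should yield a clean 1.2 with no merge.
repro() {
    cvs -d "$REPO" co module                    # step 1-2: user1's copy
    echo user1-change >> module/file            # step 3 (user1)
    cvs -d "$REPO" co -d module2 module         # steps 1-2: user2's copy
    echo user2-change >> module2/file           # step 3 (user2)
    ( cd module2 && cvs -d "$REPO" ci -m change file )   # step 4 (user2)
    ( cd module  && cvs -d "$REPO" up -C file )          # step 5 (user1)
}
```

Run against a throwaway repository, the bug reported above shows as a bogus merge (local repo) or the "cannot open ... for copying" abort (client/server) at step 5.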