Okay, more git bashing...

After losing three hours' work today to something as simple as "git stash
save" (where git stashed 3000+ untracked/generated files, despite the docs
saying it doesn't do that), then being unable to "stash apply" them back
(because "file already exists..." for 3000+ files), and having to
reconstruct the work from screenshots of the diff I managed to grab before
deleting and checking out again (as one so often has to do with git), I
went searching for "data loss in git" and stumbled across this page near
the top of the results:

http://www.cs.cmu.edu/~davide/howto/git_lose.html
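
(For what it's worth, here's a minimal sketch of how that collision mode
can play out and one documented way back out of it. This sketch uses the
-u flag to pull in untracked files explicitly; a plain "git stash save" is
documented NOT to touch untracked files, which is exactly why today's
behavior was such a surprise. The file and branch names are made up.)

   # stash, explicitly including untracked files
   git stash save -u "wip"

   # suppose a build step then regenerates the same untracked files...
   make generate

   # ...so applying the stash now fails with "... already exists ..."
   # for every colliding file
   git stash apply

   # one recovery path: unpack the stash onto a new branch rooted at
   # the commit the stash was made from, sidestepping the collisions
   git stash branch rescue-wip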

What really makes that worth reading is the list of suggestions at the end
of the page. They start out with this little gem:


   - Internalize the concept that git is *designed* to forget things. If
     you haven't seen something reach another repository, maybe it didn't.
     Heck, even if you *did* see it go somewhere else, maybe it fell out
     of the historical record there and then got garbage-collected.

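That "fell out of the historical record" part is not hyperbole, either. A
commit which nothing but the reflog references survives only until its
reflog entry expires (roughly 90 days by default, if memory serves), after
which gc may delete it for good. A sketch, forcing the expiry by hand:

   # commit, then move the branch pointer away, leaving the commit
   # reachable only via the reflog
   git commit -am "precious work"
   git reset --hard HEAD~1

   # expire the reflog entries now instead of waiting months...
   git reflog expire --expire=now --all

   # ...and gc is then free to prune the orphaned commit outright
   git gc --prune=now
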

It sickens me no end that people accept that so readily, and then go back
for a second helping.

It occurred to me today that in nearly 31 years of using a computer I
have, in total, lost more data to git (while following the
instructions!!!) than to any other single piece of software. I also
concluded that git is the only SCM out there which makes the simple stuff
difficult. Even RCS is simpler to use. Sure, CVS has its limits, but
respect those limits and it works just fine. I never lost a line of code
in CVS.

...


-- 
----- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf