Re: [fossil-users] error in commit

2016-12-05 Thread Aldo Nicolas Bruno
It worked perfectly! thanks!
Also the timeline clearly shows what happened ;)

https://pizzahack.eu/fossil/thunderchez/timeline

On 05/12/2016 23:01, Tony Papadimitriou wrote:
> The following steps should work assuming you haven't added any more
> commits
> after the mistake (and haven't pushed your changes anywhere else).  Also,
> your current checkout is the one with the mistake (if not, first do F
> UP to
> the appropriate check-in):
>
> f co prev --keep
> f pur ch tip
> f com file1 file2 file3 ...
> f pur o 1
> f reb
>
> (where f = fossil)
> First line goes back one check-in without changing any of your files on
> disk.
> Second line purges the mistake (and gives a number -- usually 1 if no
> more
> purges pending -- use this number later)
> Third line commits the files you meant to commit (file1 file2 file3 ...)
> Fourth line completely kills the purged content from the repo (use the
> number you got in step 2)
> Final line rebuilds the repo and removes the purged content.
>
> I get into this situation often, and these steps (if done right after the
> mistake -- and not many actions later) work well.
>
> (If anyone has a quicker method, please enlighten us.)
>
> -Original Message- From: Aldo Nicolas Bruno
> Sent: Monday, December 05, 2016 11:44 PM
> To: Fossil SCM user's discussion
> Subject: [fossil-users] error in commit
>
> Hi,
> By mistake I did fossil commit -m "added lalal" without specifying
> which files to commit, so it committed all the changed files, but my
> intention was to commit only some of them.
> Is there a way to elegantly undo the commit, or to modify it so as to
> exclude some files?
> Thanks
> Aldo
> ___
> fossil-users mailing list
> fossil-users@lists.fossil-scm.org
> http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users




Re: [fossil-users] error in commit

2016-12-05 Thread Tony Papadimitriou

The following steps should work assuming you haven't added any more commits
after the mistake (and haven't pushed your changes anywhere else). Also, this
assumes your current checkout is the one with the mistake (if not, first do
f up to the appropriate check-in):

f co prev --keep
f pur ch tip
f com file1 file2 file3 ...
f pur o 1
f reb

(where f = fossil)
First line goes back one check-in without changing any of your files on
disk.
Second line purges the mistake (and prints a number -- usually 1 if no
more purges are pending -- use this number later).
Third line commits the files you meant to commit (file1 file2 file3 ...).
Fourth line permanently deletes the purged content from the repo (use the
number you got in step 2).
Final line rebuilds the repo and removes the purged content.

I get into this situation often, and these steps (if done right after the
mistake -- and not many actions later) work well.

(If anyone has a quicker method, please enlighten us.)
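For anyone unfamiliar with Fossil's prefix-matched command abbreviations, I
believe the five commands above spell out as follows (a sketch only -- please
double-check against "fossil help" before running):

fossil checkout previous --keep
fossil purge checkins tip
fossil commit file1 file2 file3 ...
fossil purge obliterate 1
fossil rebuild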

-Original Message- 
From: Aldo Nicolas Bruno

Sent: Monday, December 05, 2016 11:44 PM
To: Fossil SCM user's discussion
Subject: [fossil-users] error in commit

Hi,
By mistake I did fossil commit -m "added lalal" without specifying
which files to commit, so it committed all the changed files, but my
intention was to commit only some of them.
Is there a way to elegantly undo the commit, or to modify it so as to
exclude some files?
Thanks
Aldo


[fossil-users] error in commit

2016-12-05 Thread Aldo Nicolas Bruno
Hi,
By mistake I did fossil commit -m "added lalal" without specifying
which files to commit, so it committed all the changed files, but my
intention was to commit only some of them.
Is there a way to elegantly undo the commit, or to modify it so as to
exclude some files?
Thanks
Aldo


Re: [fossil-users] Bug report: Terrible Performance, when Checking in LLVM Source

2016-12-05 Thread Warren Young
On Dec 4, 2016, at 12:50 PM, Karel Gardas wrote:
> 
> On Sun, Dec 4, 2016 at 5:28 AM, Martin Vahi wrote:
>> It is about 4.4GiB, over 100k files, over 6k folders,
>> but it should not be that bad. After all, that's what
>> many projects look like in 2016.
> 
> This statement is IMHO a bit unfair. You basically grab *4* Subversion
> trees (if I count correctly), smash them into one big tree, and commit
> that into a Fossil repo. So if you expect the speed of svn, then please
> make a fair comparison: independent Subversion trees against independent
> Fossil trees.

Actually, you’d have to compare Fossil against checking out every single 
revision from each separate Subversion repository:

$ for r in $(seq 1 $maxrev) ; do svn co -r $r … ; done

A Subversion checkout just gets you the tip of trunk by default, and you have 
to go back to the server in order to walk back through history.  Fossil grabs 
everything.

There are plans laid out on the mailing list for making Fossil do both shallow 
and narrow checkouts.  As it stands, Fossil always gives you a 100% wide and 
100% deep checkout.

Subversion allows both: its checkouts are always shallow, and you can get a 
narrow checkout by specifying a subdirectory within the repo, grabbing only 
that slice.

It’s a good idea.  Someone just has to write it.


[fossil-users] Repo Checksum Speedup Idea: flaw in my comment

2016-12-05 Thread Martin Vahi
As it turns out, I already made a mistake
in the tree-based algorithm.

The old, proposed, flawed version:
> ...
> array_of_nodes_with_wrong_x_node_hash=unique_by_node_ID(
>   clone(
>   ob_AVL_tree.array_of_nodes_that_had_changes_on_path_2_root
>   ).concat(clone(
>   ob_AVL_tree.array_of_nodes_that_have_changed_children
>   // change within children means that any
>   // of the children changed during the insertion
>   // or removal of nodes to/from the ob_AVL_tree
>   // after the AVL-tree got (automatically) rebalanced.
>   // A change between null and an existing child is
>   // also considered a change.
>   ))
>   ).sort_by_path_length_from_node_to_root_node
> 
> ...

A newer version looks only at the changes within
the children:

array_of_nodes_with_wrong_x_node_hash=unique_by_node_ID(
  clone(
  ob_AVL_tree.array_of_nodes_that_have_changed_children
  )).sort_by_path_length_from_node_to_root_node

Thank You.



[fossil-users] Repo Checksum Speedup Idea

2016-12-05 Thread Martin Vahi
> From: Joerg Sonnenberger
> To: fossil-users@lists.fossil-scm.org
> Sent: Sunday, December 4, 2016, 20:55
> Subject: Re: [fossil-users] Bug report: Terrible Performance, when
Checking in LLVM Source
> ...
> No. What repo checksum does is compute a separate checksum over the
> concatenation of all files.
>
> Joerg
> ...

Thank you all for the answers to my previous
bug report letters. I have not made up my mind
how to proceed with the large-repo case, but
two things I know for certain:

certain_thing_1)
Hash algorithms will evolve, and whatever
Fossil uses for the checksum as of 2016_12
will eventually become obsolete.

certain_thing_2)
Regardless of what the hash algorithm is,
there exists at least one approach that
makes it possible to calculate the checksum
of a concatenation of a large set of files
without re-calculating the hashes of the
files that did not change.


The naive and slow version:

array_of_relative_file_paths=[ file_1, file_2, ..., file_N ]
blob_1=null
i_len=array_of_relative_file_paths.length
for i=0; i<i_len; i=i+1 {
    blob_1=concat(blob_1, read_file(array_of_relative_file_paths[i]))
} // for
x_checksum=hash(blob_1)
// Whenever any file changes, the blob_1 needs to be, if not
// re-assembled like in the above pseudocode, then at least fed in to
// the hash(...) in some stream fashion, and the hash(...) has to
// re-process the file_1, file_2, ...

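For concreteness, here is a minimal runnable Python rendering of the naive
approach (the in-memory dict standing in for the file tree, and SHA-256 as
the hash, are purely illustrative choices):

```python
import hashlib

def naive_repo_checksum(files):
    # files: dict mapping relative path -> file content (bytes).
    # Concatenate every file's content in path order and hash once.
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(files[path])
    return h.hexdigest()

repo = {"file_1": b"alpha", "file_2": b"beta", "file_3": b"gamma"}
print(naive_repo_checksum(repo))
```

If any single file changes, every byte of every file has to pass through
hash(...) again; that re-processing cost is what the tree-based idea avoids.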

The gist of the proposed idea is to place
various hashes to a tree and then hash both, files
and hashes of the files, giving a probabilistic opportunity
to avoid running the hash function on all of the files
after the collection of files has changed:

ob_AVL_tree=AVL_tree_class.new

function_instance_1=(f_arg_ob_node_reference,f_arg_x_file_hash){
    x_hash_left=hash(null)
    x_hash_right=hash(null)

    ob_child_left=f_arg_ob_node_reference.get_child_left()
    ob_child_right=f_arg_ob_node_reference.get_child_right()
    if ob_child_left != null {
        x_hash_left=ob_child_left.record.x_node_hash
    } // if
    if ob_child_right != null {
        x_hash_right=ob_child_right.record.x_node_hash
    } // if

    x_bytestream=concat(x_hash_left,
                        "_separator_",
                        x_hash_right,
                        "_separator_",
                        f_arg_x_file_hash)
    return hash(x_bytestream)
}
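Assuming hash(...) is something like SHA-256, the node-hash idea above can be
sketched in runnable Python; the class names, the 3-node tree, and the update
walk are all illustrative, not Fossil's actual implementation:

```python
import hashlib

EMPTY = hashlib.sha256(b"").hexdigest()          # stands in for hash(null)

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Node:
    def __init__(self, file_hash, left=None, right=None):
        self.file_hash = file_hash               # hash of this node's file
        self.left, self.right = left, right
        self.node_hash = None                    # combined subtree hash

def node_hash_of(node):
    return node.node_hash if node is not None else EMPTY

def recompute(node):
    # Corresponds to function_instance_1 in the pseudocode above.
    payload = "_separator_".join(
        [node_hash_of(node.left), node_hash_of(node.right), node.file_hash]
    ).encode()
    node.node_hash = h(payload)

def full_rehash(node):
    # Naive: rehash the entire tree bottom-up.
    if node is None:
        return
    full_rehash(node.left)
    full_rehash(node.right)
    recompute(node)

# Build a 3-node tree and hash it completely once.
left, right = Node(h(b"file_1")), Node(h(b"file_2"))
root = Node(h(b"file_3"), left, right)
full_rehash(root)

# Now change only file_1: rehash just the path leaf -> root.
left.file_hash = h(b"file_1 v2")
recompute(left)
recompute(root)
incremental_root = root.node_hash

# Sanity check: a full rehash gives the same root hash.
full_rehash(root)
print(incremental_root == root.node_hash)   # True
```

The point of the sort-by-path-length step in the pseudocode is the same as
the leaf-to-root order here: a node's hash may only be recomputed after all
of its changed descendants have been recomputed.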

array_of_relative_file_paths=[ file_1, file_2, ..., file_N ]
i_len=array_of_relative_file_paths.length
if 0