[fossil-users] Help improve bot exclusion

2012-10-30 Thread Richard Hipp
A Fossil website for a project with a few thousand check-ins can have a lot
of hyperlinks.  If a spider or bot starts to walk that site, it will visit
literally hundreds of thousands or perhaps millions of pages, many of which
are things like "vdiff" and "annotate", which are computationally expensive
to generate, or like "zip" and "tarball", which give multi-megabyte replies.
If you get a lot of bots walking a Fossil site, it can really load down the
CPU and run up bandwidth charges.

To prevent this, Fossil uses bot-exclusion techniques.  First it looks at
the USER_AGENT string in the HTTP header and uses that to distinguish bots
from humans.  Of course, a USER_AGENT string is easily forged, but most
bots are honest about who they are so this is a good initial filter.  (The
undocumented "fossil test-ishuman" command can be used to experiment with
this bot discriminator.)

The second line of defense is that hyperlinks are disabled in the
transmitted HTML.  There is no href= attribute on the <a> tags.  The href=
attributes are added by JavaScript code that runs after the page has been
loaded.  The idea here is that a bot can easily forge a USER_AGENT string,
but running JavaScript code is a bit more work, and even malicious bots
don't normally go to that kind of trouble.

So, then, to walk a Fossil website, an agent has to (1) present a
USER_AGENT string from a known friendly web browser and (2) interpret
JavaScript.

This two-phase defense against bots is usually effective.  But last night,
a couple of bots got through on the SQLite website.  No great damage was
done as we have ample bandwidth and CPU reserves to handle this sort of
thing.  Even so, I'd like to understand how they got through so that I
might improve Fossil's defenses.

The first run on the SQLite website originated in Chantilly, VA and gave a
USER_AGENT string as follows:

Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0;
SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729;
Media Center PC 6.0; .NET4.0C; WebMoney Advisor; MS-RTC LM 8)

The second run came from Berlin and gives this USER_AGENT:

Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)

Both sessions started out innocently.  The logs suggest that there really
was a human operator initially.  But then after about 3 minutes of normal
browsing, each session starts downloading every hyperlink in sight at a
rate of about 5 to 10 pages per second.  It is as if the user had pressed a
"Download Entire Website" button on their browser.  Question:  Is there
such a button in IE?

Another question:  Are significant numbers of people still using IE6 and
IE7?  Could we simply change Fossil to consider IE prior to version 8 to be
a bot, and hence not display any hyperlinks until the user has logged in?

Yet another question:  Is there any other software on Windows that I am not
aware of that might be causing the above behaviors?  Are there plug-ins or
other tools for IE that will walk a website and download all its content?

Finally: Do you have any further ideas on how to defend a Fossil website
against runs such as the two we observed on SQLite last night?

Tnx for the feedback
-- 
D. Richard Hipp
d...@sqlite.org
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] Help improve bot exclusion

2012-10-30 Thread Arjen Markus

On Tue, 30 Oct 2012 06:17:05 -0400
 Richard Hipp d...@sqlite.org wrote:



 Both sessions started out innocently.  The logs suggest that there really
 was a human operator initially.  But then after about 3 minutes of normal
 browsing, each session starts downloading every hyperlink in sight at a
 rate of about 5 to 10 pages per second.  It is as if the user had pressed
 a "Download Entire Website" button on their browser.  Question:  Is there
 such a button in IE?


I just tried it: you can save a URL as a single web page or a web archive
(extension .wht, whatever that means).  So it seems quite possible - and it
appears to be the default when using "save as".

This was with IE 8.

Regards,

Arjen





Re: [fossil-users] Help improve bot exclusion

2012-10-30 Thread Lluís Batlle i Rossell
On Tue, Oct 30, 2012 at 06:17:05AM -0400, Richard Hipp wrote:
 Finally: Do you have any further ideas on how to defend a Fossil website
 against runs such as the two we observed on SQLite last night?

This problem affects almost any web software, and I think that job is delegated
to robots.txt. Isn't this approach good enough? And in the particular case of
the fossil standalone server, it could serve a robots.txt.

How do programs like 'viewcvs' or 'viewsvn' deal with that?

Regards,
Lluís.


Re: [fossil-users] Help improve bot exclusion

2012-10-30 Thread Kees Nuyt
On Tue, 30 Oct 2012 06:17:05 -0400, Richard Hipp d...@sqlite.org wrote:

[...]

 Both sessions started out innocently.  The logs suggest that there really
 was a human operator initially.  But then after about 3 minutes of normal
 browsing, each session starts downloading every hyperlink in sight at a
 rate of about 5 to 10 pages per second. It is as if the user had pressed a
 Download Entire Website button on their browser.  Question:  Is there
 such a button in IE?

No, just "save page as".  It will not follow hyperlinks, only save the
html and embedded resources, like images.

 Another question:  Are significant numbers of people still using IE6 and
 IE7?  Could we simply change Fossil to consider IE prior to version 8 to be
 a bot, and hence not display any hyperlinks until the user has logged in?

I don't think it would help much. Newer versions will potentially run
the same add-ons.

By the way, over 5% of the population still use these older versions.
http://stats.wikimedia.org/archive/squid_reports/2012-09/SquidReportClients.htm

 Yet another question:  Is there any other software on Windows that I am not
 aware of that might be causing the above behaviors?  Are there plug-ins or
 other tools for IE that will walk a website and download all its content?

There are several browser add-ons that will try to walk complete
websites, e.g.:
http://www.winappslist.com/download_managers.htm
http://www.unixdaemon.net/ie-plugins.html

One can also think of validator tools.

Standalone programs usually will not run javascript.


 Finally: Do you have any further ideas on how to defend a Fossil website
 against runs such as the two we observed on SQLite last night?

Perhaps the href javascript should run onfocus, rather than onload?
(untested)

Other defenses could use DoS-defense techniques, such as not honouring (or
aggressively delaying responses to) more than a certain number of requests
within a certain time.  That is not nice, because the server would have
to maintain (more) session state.

Sidenote:
As far as I can tell, several modern browsers have a read-ahead option
that will try to load more pages of the site before a link is clicked.
https://developers.google.com/chrome/whitepapers/prerender
Those will not walk a whole site, though.

-- 
Groet, Cordialement, Pozdrawiam, Regards,

Kees Nuyt



Re: [fossil-users] Help improve bot exclusion

2012-10-30 Thread Richard Hipp
On Tue, Oct 30, 2012 at 6:23 AM, Lluís Batlle i Rossell vi...@viric.name wrote:

 On Tue, Oct 30, 2012 at 06:17:05AM -0400, Richard Hipp wrote:
  Finally: Do you have any further ideas on how to defend a Fossil website
  against runs such as the two we observed on SQLite last night?

 This problem affects almost any web software, and I think that job is
 delegated
 to robots.txt. Isn't this approach good enough?


Robots.txt only works over an entire domain.  If your Fossil server is
running as CGI within that domain, you can manually modify your robots.txt
file to exclude all or part of the fossil URI space.  But as that file is
not under control of Fossil, you have to make this configuration yourself -
Fossil cannot help you.  This burden can become acute when you are managing
many dozens or even hundreds of Fossil repositories.  An automatic system
is better.







-- 
D. Richard Hipp
d...@sqlite.org


Re: [fossil-users] Help improve bot exclusion

2012-10-30 Thread Bernd Paysan
On Tuesday, 30 October 2012, 08:20:14, Richard Hipp wrote:

 Robots.txt only works over an entire domain.  If your Fossil server is
 running as CGI within that domain, you can manually modify your robots.txt
 file to exclude all or part of the fossil URI space.  But as that file is
 not under control of Fossil, you have to make this configuration yourself -
 Fossil cannot help you.  This burden can become acute when you are managing
 many dozens or even hundreds of Fossil repositories.  An automatic system
 is better.

Search engine crawlers do honor the robots meta tag:

http://www.robotstxt.org/meta.html

Adding this is a piece of cake (just change the page template), but it doesn't
help against malware.

--
Bernd Paysan
If you want it done right, you have to do it yourself
http://bernd-paysan.de/




Re: [fossil-users] Help improve bot exclusion

2012-10-30 Thread Kees Nuyt
[Default] On Tue, 30 Oct 2012 06:17:05 -0400, Richard Hipp
d...@sqlite.org wrote:

 Finally: Do you have any further ideas on how to defend a Fossil website
 against runs such as the two we observed on SQLite last night?

Another suggestion:
Include a (mostly invisible, perhaps hard to recognize) logout hyperlink
on every page that immediately invalidates the session if it is
followed. Users will not see it and not be bothered by it, bots will
stumble upon it.

-- 
Groet, Cordialement, Pozdrawiam, Regards,

Kees Nuyt



Re: [fossil-users] Help improve bot exclusion

2012-10-30 Thread Kees Nuyt
[Default] On Tue, 30 Oct 2012 10:13:47 -0500, Nolan Darilek
no...@thewordnerd.info wrote:

 And, most importantly, don't sacrifice accessibility in the name of 
 excluding bots. Mouseover links are notoriously inaccessible. Same with 
 only adding href on focus via JS rather than on page load. If I tab 
 through a page, that would seem to break keyboard navigation.

I agree.
I should have been more explicit: run the script when body gets focus,
not per hyperlink.

-- 
Groet, Cordialement, Pozdrawiam, Regards,

Kees Nuyt



Re: [fossil-users] Help improve bot exclusion

2012-10-30 Thread Steve Havelka
My guess is that you don't really want to filter out bots, specifically,
but really anyone who's attempting to hit every link Fossil makes--that
is to say, it's the behavior that we're trying to stop here, not the actor.

I suppose what I'd do is set up a mechanism to detect when the remote
user is pulling down data too quickly to be a human, non-abusive person,
and when Fossil detects that, send back a blank "Whoa, nellie!  Slow
down, human!" page for a minute or five.

I'd allow the user to configure two thresholds: the number of pages per
second that triggers this, and the number of seconds within a five-minute
window that the pages-per-second threshold may be exceeded.
I'd give them defaults of 3 pages per second and 3 times in five
minutes.  So, for example, if a user hits 3 links in one second, which
can happen if you know exactly where you're going and the repository
loads quickly, it's ok the first time, even the second, but the third
time, it locks you out of the web interface for a little while.

Command-line stuff, like cloning/push/pull actions, ought to remain
accessible under all circumstances, regardless of the activity on the
web UI.

What do you think?





Re: [fossil-users] sync bug in 1.24

2012-10-30 Thread Richard Hipp
On Tue, Oct 30, 2012 at 12:20 PM, Barak A. Pearlmutter ba...@cs.nuim.ie wrote:

 I installed the pre-compiled fossil 1.24 as /usr/local/bin/fossil and
 then on my server ran fossil rebuild; fossil serve in the repo
 directory and fossil rebuild on the client, in the repo.  Both on my
 own copy of the main fossil repo, with debian packaging info in branch
 debian and a few extra tags.  Then I got the below scary message,
 and (obviously) a failure to fully sync.


Is your server running an earlier version of Fossil?  I think this problem
has been fixed.  Either that, or you've found a new problem with the same
symptoms.  Please double-check your server version and let me know.
Tnx.



 I'll leave the server running, so you can access for yourself.

 (Tried to open a ticket, but the right links seem to have vanished.
 Would not CC, but I note my earlier messages to fossil-users did not
 appear on the archives, so I assume some filter is blocking me.)

 --Barak.
 --
 Barak A. Pearlmutter
  Hamilton Institute  Dept Comp Sci, NUI Maynooth, Co. Kildare, Ireland
  http://www.bcl.hamilton.ie/~barak/

 

 $ /usr/local/bin/fossil version
 This is fossil version 1.24 [8d758d3715] 2012-10-22 12:48:04 UTC

 $ /usr/local/bin/fossil sync
 http://barak:x...@cvs.bcl.hamilton.ie:8080
 Bytes  Cards  Artifacts Deltas
 Sent: 908 17  0  0
 Received:5618120  0  0
 Sent:  116975 17 15 96
 /usr/local/bin/fossil: server replies with HTML instead of fossil sync
 protocol:
 gimme ee2d352b3e23c80eaf4bfa5465be546b12db6e1b
 igot 0523121a3818ed65b23f8f5e452f7fd68fe67b37
 igot 0697467f839265af0c49c7d5363e7f740e8c8404
 igot 0ff3a17a3b81e16c63dc407211936d4aa4669c94
 igot 110c83111b2118b30c401af594c86fe4ec89c8d1
 igot 2b75b303a5ad9de6195f0866477380f8c930d5e8
 igot 3fa552fab44471ac7233caf097eae06b5d1ffd99
 igot 49dfb01dd93ea69e2ddaa136620ad82d80e7af33
 igot 54332f0aa57130c062f47fe7b2894d99061b07bf
 igot 6ad89120ccdbf4da9fbaecf4714a93c60fc3b549
 igot 931ff23e942be884f1709180a792f90683b6afff
 igot c1efb4175c35877da2c7fc22e9e9ce7e98b806e7
 igot cd1fc5f28b1b92759cc714b919e201f0a43663d7
 igot f60a86d0f2339bc2e33d0058fed91f601319a306
 igot f9377e314ebe3c33c63aa427296e226473b9c266
 # timestamp 2012-10-30T15:58:12
 <p class="generalError">infinite loop in DELTA table</p>
 Received: 779 17  0  0
 Total network traffic: 43355 bytes sent, 3866 bytes received




-- 
D. Richard Hipp
d...@sqlite.org


[fossil-users] fossil all rebuild seg faults

2012-10-30 Thread James Turner
With the latest fossil trunk (This is fossil version 1.24 [bdbe6c74b8]
2012-10-30 18:14:27 UTC) fossil all rebuild is seg faulting for me.

fossil all rebuild
Segmentation fault (core dumped)

gdb is showing the below:

#0  collect_arguments (zArg=0x7f7f Address 0x7f7f out
of bounds) at allrepo.c:61
61  allrepo.c: No such file or directory.
in allrepo.c

-- 
James Turner
ja...@calminferno.net


Re: [fossil-users] fossil all rebuild seg faults

2012-10-30 Thread Richard Hipp
Please try the latest and let me know whether or not the problem is fixed.
Tnx for the report.





-- 
D. Richard Hipp
d...@sqlite.org


Re: [fossil-users] fossil all rebuild seg faults

2012-10-30 Thread James Turner
Looks good. fossil all rebuild is working for me again. If it helps
explain anything, I'm running OpenBSD.

-- 
James Turner
ja...@calminferno.net


Re: [fossil-users] fossil all rebuild seg faults

2012-10-30 Thread Steve Bennett
On 31/10/2012, at 10:11 AM, Richard Hipp wrote:

 Please try the latest and let me know whether or not the problem is fixed.  
 Tnx for the report.

Regarding your latest commit, I've run across this on 64-bit too.
The problem is the '0' at the end of the variable args.
Use NULL instead; otherwise you only get a 32-bit zero value instead of a
64-bit one.

Cheers,
Steve

 

--
Embedded Systems Specialists - http://workware.net.au/
WorkWare Systems Pty Ltd
W: www.workware.net.au  P: +61 434 921 300
E: ste...@workware.net.au   F: +61 7 3391 6002








Re: [fossil-users] sync bug in 1.24

2012-10-30 Thread Richard Hipp
On Tue, Oct 30, 2012 at 6:18 PM, Barak A. Pearlmutter ba...@cs.nuim.ie wrote:

  Is your server running an earlier version of Fossil?

 As I said,

  I installed the pre-compiled fossil 1.24 as /usr/local/bin/fossil
  and then on my server ran fossil rebuild; fossil serve in the repo
  directory and fossil rebuild on the client, in the repo.  Both on

 I was careful to only use the precompiled 1.24 binary downloaded from
 fossil-scm in all the above, on both server and client.


Please try for me (on the server):

 fossil pull http://www.fossil-scm.org/fossil

Then:

 fossil test-integrity

Also, just to be certain:

 which fossil

Thanks.  Sorry for the problems.




 (I would give you access to the client repo too, but it is only
 first-class on IPv6, and accesses IPv4 via NAT.)

 --Barak.




-- 
D. Richard Hipp
d...@sqlite.org


Re: [fossil-users] fossil all rebuild seg faults

2012-10-30 Thread Richard Hipp
On Tue, Oct 30, 2012 at 8:18 PM, Steve Bennett ste...@workware.net.au wrote:

 On 31/10/2012, at 10:11 AM, Richard Hipp wrote:

 Please try the latest and let me know whether or not the problem is
 fixed.  Tnx for the report.


 Regarding your latest commit, I've run across this on 64 bit too.
 The problem is the '0' at the end of the variable args.
 Use NULL instead, otherwise you only get a 32 bit zero value instead of 64
 bit.


I bet you're right.  I've previously made the same mistake using
Tcl_AppendResult().







-- 
D. Richard Hipp
d...@sqlite.org


Re: [fossil-users] fossil all rebuild seg faults

2012-10-30 Thread James Turner
On Tue, Oct 30, 2012 at 08:22:25PM -0400, Richard Hipp wrote:
 I bet you're right.  I've previously made the same mistake using
 Tcl_AppendResult().

Yeah I'm running amd64, so definitely 64bit over here.

 

-- 
James Turner
ja...@calminferno.net