Re: [fossil-users] fossil --repolist showing no repositories

2017-12-21 Thread Andy Bradford
Thus said "dewey.hyl...@gmail.com" on Thu, 21 Dec 2017 13:40:36 -0500:

> * fossil does  serve both a repo  file and a directory  if these files
> are copied to a different local directory.

Unless  things have  changed, it  is  generally not  recommended to  run
Fossil on a non-local filesystem.

Andy
-- 
TAI64 timestamp: 40005a3c8d71


___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-21 Thread dewey.hyl...@gmail.com
This is something I've not thought of - and I think this is how the fossil
source itself is propagated to its official mirrors. I don't know why this
didn't occur to me, unless it is simply an instance of:
"When you are a hammer, everything is a nail."
And I've been looking at container-based replication (e.g. docker swarm mode).

At any rate, my comment regarding fossil not being the only service in the
category was related to apps which were difficult to separate from their data.
The fossil front-end can't be separated from the data because the front-end
and back-end are the same. A mysql database server cannot be separated from
its data, but any of the front-ends could be - because they are built to
obtain their data over the network.

In the end, I hate that I spent two days on something which I should have
logically mapped around before I got started - but I've learned some, so it
wasn't all wasted time. I'll look at using fossil to provide the replication
for itself and move on to the next service.

Thanks again for your insight.

- On Dec 21, 2017, at 4:10 PM, Warren Young war...@etr-usa.com wrote:

> On Dec 21, 2017, at 1:00 PM, dewey.hyl...@gmail.com wrote:
>> 
>> That's where the NAS and sshfs came into play.
> 
> You seem to be trying to use containers and such to provide distributed
> service, but Fossil already does that: it’s a DVCS.  There’s no one telling
> you it must live in only one place.
> 
> [...]


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-21 Thread Warren Young
On Dec 21, 2017, at 1:00 PM, dewey.hyl...@gmail.com wrote:
> 
> That's where the NAS and sshfs came into play.

You seem to be trying to use containers and such to provide distributed 
service, but Fossil already does that: it’s a DVCS.  There’s no one telling you 
it must live in only one place.

Therefore:

Option 1: Run the container anywhere you like, but with its internal Fossil 
storing to the container’s view of the host OS, not to some other machine over 
a network file system.  Then from another computer, clone that repository onto 
the SAN or NAS.  Periodically, run a sync.  Now your repo is both in the 
container and on the NAS/SAN.

Option 2: If the NAS permits, run a Fossil instance there.  Clone it into the 
container for actual use.  Whether syncs mostly go first to the container or to 
the NAS and are then pushed to the other doesn’t much matter.  Again, think 
distributed.

Either way, Fossil gets a local, real POSIX-compliant file system for SQLite, and 
uses its own sync protocol for inter-host operations, which means that SQLite 
transactions end up avoiding the need to worry about network unreliability.  
The clone/push will either complete successfully or it will be wholly rolled 
back to the prior safe state.
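The "periodically, run a sync" step in Option 1 can be a simple cron job; a
minimal sketch, where the NAS-side clone path and the remote URL are
assumptions, not anything from this thread:

```crontab
# Hypothetical: the NAS-side clone was created once with
#   fossil clone http://container-host:8080/project /nas/fossils/project.fossil
# and keeps itself current against that remote every 15 minutes.
*/15 * * * *  fossil sync -R /nas/fossils/project.fossil
```

Because the sync protocol is transactional, a failed run leaves the clone at
its previous state rather than half-updated.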

> I'm sure fossil won't be the only service
> in that category - I just started with fossil because I *thought* it would be 
> an
> easy win.

Any DBMS is going to have problems with sshfs.  It’s not something special to 
SQLite.

If you mean to reference VCSes competing with Fossil, it’s at best a “push” (in 
the poker sense) when it comes to networked data reliability with your current 
storage design, simply because reliable storage in the face of multiple writers 
requires correct locking.

Switching to another VCS may even make things worse.  Fossil looks like it is 
causing problems here, but only because it’s trying to do things in an 
ACID-compliant fashion, where other systems might not even try, and so *appear* 
to be less problem-free.

I’ve tried researching Git ACID compliance and concurrency, and all I see is 
raw speculation, no hard claims from people who actually know what they’re 
talking about, and have battle-tested it.

SQLite, by contrast, is very well known to be a durable data store…*if* you put 
it on a filesystem with correct locking semantics!


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-21 Thread Richard Hipp
On 12/21/17, dewey.hyl...@gmail.com wrote:
> 1268      vfile_scan(&base, blob_size(&base), 0, 0, 0);
> (gdb) n

You need to step into vfile_scan() (using "s" instead of "n") because
that is where all the interesting stuff happens.  There is a loop
inside vfile_scan() that uses readdir() to read out the contents of a
directory and store each filename into the SFILE temp table of the
database.  You need to figure out for us why it is that the *.fossil
files are not being discovered by readdir(), or why they are not
making it into the SFILE table.  This will require some C-debugging
skills.

Nobody else can do this for you, because nobody else is able to
reproduce your problem.
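A sketch of that session as a gdb command file (it assumes the fossil binary
was rebuilt with -g -O0 as suggested earlier in the thread; adjust paths to
taste):

```gdb
# vfile_scan.gdb -- start with:
#   gdb -x vfile_scan.gdb --args ./fossil http --repolist /fossils
break vfile_scan
run
# At the breakpoint, step line by line ("next" stays inside vfile_scan,
# "step" descends into callees) and inspect state after each readdir():
next
info locals
```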

-- 
D. Richard Hipp
d...@sqlite.org


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-21 Thread dewey.hyl...@gmail.com
Ugh, that is bad news. I was interested in the ssh method for ease of use as
well as its encrypted communications ... I suppose nfs is another possibility,
but I'm not a fan for several reasons. And maybe iscsi, if 

Here is what I'm currently attempting to accomplish:

We have redundant storage options (SAN, NAS) to leverage. These are largely
static services whose hardware rarely changes. Those get backed up, and some
of them are also replicated off-site for DR purposes. I am told to trust the
storage, and to find ways to make our apps and services more redundant.

We are using containers in docker swarm mode for other apps, so I was trying
to find a way to fit fossil into that picture. Your load balancer points to
all backend swarm nodes. One instance of the fossil service can be fired up on
any node, and all nodes redirect traffic to the correct location. If the node
running fossil goes away, an instance is started elsewhere. For this to work,
of course, I need shared storage. That's where the NAS and sshfs came into
play.

I've been using fossil in a container for quite a long while, just not in
swarm mode - so the app container is not decoupled from the data, which makes
failing over to another node quite a bit more difficult. I do understand
fossil uses sqlite, which of course is *not* a network-friendly database such
as mysql. Its simplicity is why I love it, and why I will stick with it. I'm
just hoping to find a solution to my need that is simple - sshfs would have
been great in that regard.

Any other simple solution would be welcomed. Otherwise, I'll just have to
stick with the tried-and-true backup and restore method, and not being able
to move the fossil service around very easily. But I'm sure fossil won't be
the only service in that category - I just started with fossil because I
*thought* it would be an easy win.

Thanks for your insight.

- On Dec 21, 2017, at 2:02 PM, Warren Young war...@etr-usa.com wrote:

> On Dec 21, 2017, at 11:40 AM, dewey.hyl...@gmail.com wrote:
>> 
>>  ranch2@10.1.51.120:fossils on /fossils type fuse.sshfs
>>  (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
> 
> Running SQLite — upon which Fossil is based — over sshfs is a bad idea.  The
> current implementation doesn’t even try to implement the file locking
> operation:
> 
> [...]


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-21 Thread dewey.hyl...@gmail.com
New host: CentOS 7
Running directly on the host, with repositories over sshfs as before:

ranch2@10.1.51.120:fossils on /fossils type fuse.sshfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)

I'm not a C dev, so I'm largely unfamiliar with this type of debugging;
let me know if I need to handle this differently somehow. For better
readability, I've also posted the debug output here:

  https://paste.pound-python.org/show/QWBnPqc953rXnFK4pxOg/

```
[root@localhost fossil-2.4]# gdb ./fossil
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from /root/fossil-2.4/fossil...done.
(gdb) break repo_list_page
Breakpoint 1 at 0x45c1ce: file ./src/main.c, line 1240.
(gdb) run http --repolist /fossils 
(gdb) n
1278  n = db_int(0, "SELECT count(*) FROM sfile");
(gdb) n
1279  if( n>0 ){
(gdb) n
1308      @ No Repositories Found
(gdb) n
1310  @ 
(gdb) n
1312  cgi_reply();
(gdb) n
HTTP/1.0 200 OK
Date: Thu, 21 Dec 2017 19:02:47 GMT
Connection: close
X-UA-Compatible: IE=edge
X-Frame-Options: SAMEORIGIN
Cache-control: no-cache
Content-Type: text/html; charset=utf-8
Content-Length: 132



<html>
<head>
<base href="http:///" />
<title>Repository List</title>
</head>
<body>
No Repositories Found
</body>
</html>


1313  sqlite3_close(g.db);
(gdb) n
1314  g.db = 0;
(gdb) n
1315  return n;
(gdb) n
1316}
(gdb) n
process_one_web_page (zNotFound=0x0, pFileGlob=0x0, allowRepoList=1) at ./src/main.c:1497
1497      }else if( zNotFound ){
(gdb) n
1506  @ Not Found
(gdb) n
1507  cgi_set_status(404, "not found");
(gdb) n
1508  cgi_reply();
(gdb) n
HTTP/1.0 404 not found
Date: Thu, 21 Dec 2017 19:03:14 GMT
Connection: close
X-UA-Compatible: IE=edge
X-Frame-Options: SAMEORIGIN
Cache-control: no-cache
Content-Type: text/html; charset=utf-8
Content-Length: 151



<html>
<head>
<base href="http:///" />
<title>Repository List</title>
</head>
<body>
No Repositories Found
Not Found
</body>
</html>
1510      return;
(gdb) n
1706}
(gdb) n
cmd_http () at ./src/main.c:2226
2226}
(gdb) n
main (argc=4, argv=0x7fffe678) at ./src/main.c:768
768   fossil_exit(0);
(gdb) n
[Inferior 1 (process 12740) exited normally]
(gdb) q
[root@localhost fossil-2.4]# ls -alh /fossils/
total 1.6M
drwxr-xr-x   1 foouser foouser8 Dec 21 14:12 .
dr-xr-xr-x. 18 rootroot 259 Dec 21 13:55 ..
-rw-r--r--   1 foouser foouser 272K Dec 21 11:15 archsetup.fossil
-rw-r--r--   1 foouser foouser 224K Dec 21 11:15 guac-install-script.fossil
-rw-r--r--   1 foouser foouser 224K Dec 21 11:15 miscreports.fossil
-rwxrwxrwx   1 foouser foouser 308K Dec 21 14:12 pkgReport.fossil
-rw-r--r--   1 foouser foouser 596K Dec 21 11:31 servercfg.fossil

```

- On Dec 21, 2017, at 1:40 PM, dewey hylton dewey.hyl...@gmail.com wrote:

> Unfortunately, my first stab at this failed:
> 
> (gdb) break repo_list_page
> Breakpoint 1 at 0x7ccc0: file ./src/main.c, line 1238.
> (gdb) run http --repolist /fossils
> Starting program: /root/fossil-2.4/fossil http --repolist /fossils
> warning: Error disabling address space randomization: Operation not permitted
> Could not trace the inferior process.
> Error: ptrace: Operation not permitted
> During startup program exited with code 127.
> (gdb)
> 
> This was done in the same container which is failing as previously described.
> The host is running rancheros, which is a very stripped down and highly
> customized distribution meant specifically for hosting docker containers. As
> such, I'll have to find a different host and distribution to test this
> against.
> 
> Things I have learned thus far:
> * fossil fails in a similar way when serving a specific repository file. In
>   this case, the web page presents "Not Found" or something similar.
> * fossil appears to work properly on this mountpoint when not serving; I can
>   open one of the repository files, edit, commit, sync back across the
>   network, etc.
> * fossil does serve both a repo file and a directory if these files are
>   copied to a different local directory.
> 
> The filesystem mount looks like this:
> 
>  ranch2@10.1.51.120:fossils on /fossils type fuse.sshfs
>  (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
> 
> 
> So ... I'll continue by attempting the same type of thing on a different and
> more complete distribution, outside of a container, so that gdb will work if
> fossil fails there as well. But hopefully someone

Re: [fossil-users] fossil --repolist showing no repositories

2017-12-21 Thread Warren Young
On Dec 21, 2017, at 11:40 AM, dewey.hyl...@gmail.com wrote:
> 
>  ranch2@10.1.51.120:fossils on /fossils type fuse.sshfs 
> (rw,nosuid,nodev,relatime,user_id=0,group_id=0)

Running SQLite — upon which Fossil is based — over sshfs is a bad idea.  The 
current implementation doesn’t even try to implement the file locking operation:

https://github.com/libfuse/sshfs/blob/master/sshfs.c#L3304

That structure definition would have to include the “.lock” member for the sort 
of locking SQLite does for this to be safe.

See also:


https://www.sqlite.org/howtocorrupt.html#_filesystems_with_broken_or_missing_lock_implementations

There are other network file systems with locking semantics, but it’s often 
optional, and when present and enabled, it is often buggy.

Here’s hoping you have some other option.  Best would be to store the *.fossils 
inside the container.  Second best would be to map a [virtual] block device 
into the container and put a normal Linux file system on it.
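As an aside, the broken-locking failure mode is easy to probe with SQLite
alone. A sketch (the temp-file path below is a stand-in on a local filesystem;
point db_path at a file on the suspect sshfs mount to test it for real):

```python
import os
import sqlite3
import tempfile

# Probe whether a filesystem honours SQLite's locking protocol: a second
# writer must be refused while the first holds the write lock.
db_path = os.path.join(tempfile.mkdtemp(), "lock-probe.db")

# Two independent connections, autocommit mode, zero lock-wait timeout.
a = sqlite3.connect(db_path, timeout=0, isolation_level=None)
b = sqlite3.connect(db_path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE probe(x)")

a.execute("BEGIN IMMEDIATE")          # first writer takes the write lock
try:
    b.execute("BEGIN IMMEDIATE")      # must fail if locking works
    verdict = "BROKEN: second writer also got the write lock"
except sqlite3.OperationalError:
    verdict = "OK: second writer was locked out"
finally:
    a.execute("ROLLBACK")

print(verdict)
```

On a correctly-locking filesystem this prints the "OK" verdict; on a mount
that silently ignores locks, both writers succeed and corruption becomes
possible.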


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-21 Thread dewey.hyl...@gmail.com
Unfortunately, my first stab at this failed:

(gdb) break repo_list_page
Breakpoint 1 at 0x7ccc0: file ./src/main.c, line 1238.
(gdb) run http --repolist /fossils
Starting program: /root/fossil-2.4/fossil http --repolist /fossils
warning: Error disabling address space randomization: Operation not permitted
Could not trace the inferior process.
Error: ptrace: Operation not permitted
During startup program exited with code 127.
(gdb)

[...]

On 12/20/17, dewey.hyl...@gmail.com wrote:
>> Would someone help me understand what I'm seeing here? I expect a list of
>> repositories
>> in the web page output, but am told there are none.
> 
> I don't understand it either.
> 
> To debug, recompile Fossil with -g and -O0.  Create a HTTP request
> text file like this:
> 
>  GET / HTTP/1.0\n
>  \n
> 
> (The first line is "GET /" and the second line is blank.  The \r
> characters normally required by HTTP are optional.) Then run fossil in
> gdb.  Set a breakpoint on the routine repo_list_page and run this
> command:
> 
>run http --repolist /fossils  
> Single step through the repolist routine and try to figure out why it
> is not finding your repository files.  Please let us know what you
> discover.
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-21 Thread Warren Young
On Dec 20, 2017, at 10:24 PM, Andy Bradford  wrote:
> 
> Thus said Warren Young on Wed, 20 Dec 2017 21:02:01 -0700:
> 
>> Linux  containers  aren't  foolproof   when  it  comes  to  permission
>> isolation. Better  to not  let Fossil  have root  privs even  inside a
>> container.
> 
> Fossil  does chroot  first  and  then drop  root  privileges which  then
> changes to  the user that owns  the directory of fossils  (or the fossil
> repository if serving only one).

If you’re running a privileged container, all you then need is a local root 
escalation, one of which pops up roughly every year.

If you’re using an *unprivileged* container, you may be fine, though I don’t 
know if those will allow the host-side port 80 to be bound to the container.

https://linuxcontainers.org/pt_br/lxc/security/

Another thought: perhaps SELinux or AppArmor is interfering here?  Try
temporarily turning off whichever one your host OS runs.  If it’s SELinux, set
it to permissive mode and then use audit2allow to build a policy that will fix
the problem:

https://wiki.centos.org/HowTos/SELinux


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-20 Thread Andy Bradford
Thus said Dewey Hylton on Wed, 20 Dec 2017 20:23:23 -0500:

> All users have read/write permissions  on those files, so this doesn't
> make sense (to me) from a Unix permissions standpoint.

As Warren asked, what are the permissions on the directory that contains
the Fossils? Not only does Fossil need access to the files it is serving,
but it needs write access to the directory, as the user that it assumes
after having chrooted and dropped root privileges, because SQLite will
create additional files when the Fossil is used.
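A sketch of verifying that requirement on a scratch directory (the
example.fossil name and the paths are stand-ins; substitute the real
repository directory, checked as the serving user):

```shell
# The serving user must be able to create SQLite journal files *next to*
# the repository, so the directory itself needs write permission.
repo_dir=$(mktemp -d)
touch "$repo_dir/example.fossil"          # stand-in repository file
chmod u+rwx "$repo_dir"                   # directory: enter + create files
chmod u+rw  "$repo_dir/example.fossil"    # repository: read/write
# SQLite creates files like example.fossil-journal during writes:
touch "$repo_dir/example.fossil-journal" && echo "directory is writable"
```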

Andy
-- 
TAI64 timestamp: 40005a3b46fa




Re: [fossil-users] fossil --repolist showing no repositories

2017-12-20 Thread Andy Bradford
Thus said Warren Young on Wed, 20 Dec 2017 21:02:01 -0700:

> Linux  containers  aren't  foolproof   when  it  comes  to  permission
> isolation. Better  to not  let Fossil  have root  privs even  inside a
> container.

Fossil  does chroot  first  and  then drop  root  privileges which  then
changes to  the user that owns  the directory of fossils  (or the fossil
repository if serving only one).

Andy
-- 
TAI64 timestamp: 40005a3b45c6




Re: [fossil-users] fossil --repolist showing no repositories

2017-12-20 Thread Warren Young
On Dec 20, 2017, at 6:23 PM, Dewey Hylton  wrote:
> 
> All users have read/write permissions on those files, so this doesn’t make 
> sense (to me) from a Unix permissions standpoint. 

Fine, but what about the directory that holds these files?  That’s why I 
applied the command to the directory as well.

> the fossil binary is running as the root user in this case. 

If that’s only to allow binding to port 80, you might want to put a proxy up in 
front of it instead.  That’ll let you do TLS, if nothing else.

Linux containers aren’t foolproof when it comes to permission isolation.  
Better to not let Fossil have root privs even inside a container.
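A proxy in front, as suggested above, might look like this minimal nginx
sketch (the server name, certificate paths, and the 8080 backend port are all
assumptions; fossil would run unprivileged behind it):

```nginx
# Hypothetical: nginx terminates TLS on 443 and forwards to a non-root
# "fossil server" process listening on localhost:8080.
server {
    listen 443 ssl;
    server_name fossil.example.com;            # assumption
    ssl_certificate     /etc/ssl/fossil.pem;   # assumption
    ssl_certificate_key /etc/ssl/fossil.key;   # assumption
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```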


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-20 Thread Richard Hipp
On 12/20/17, dewey.hyl...@gmail.com  wrote:
> Would someone help me understand what I'm seeing here? I expect a list of
> repositories
> in the web page output, but am told there are none.

I don't understand it either.

To debug, recompile Fossil with -g and -O0.  Create a HTTP request
text file like this:

  GET / HTTP/1.0\n
  \n

(The first line is "GET /" and the second line is blank.  The \r
characters normally required by HTTP are optional.) Then run fossil in
gdb.  Set a breakpoint on the routine repo_list_page and run this
command:

run http --repolist /fossils

Single step through the repolist routine and try to figure out why it
is not finding your repository files.  Please let us know what you
discover.
-- 
D. Richard Hipp
d...@sqlite.org
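The request file can be created like so (a sketch; request.txt is an
arbitrary name):

```shell
# Create the two-line request file Richard describes; the \r characters
# normally required by HTTP are optional here.
printf 'GET / HTTP/1.0\n\n' > request.txt
# Inside gdb, feed it to the breakpointed process, e.g.:
#   (gdb) run http --repolist /fossils < request.txt
cat request.txt
```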


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-20 Thread Dewey Hylton
Oh, and THANK YOU for responding. 

> On Dec 20, 2017, at 5:54 PM, Warren Young  wrote:
> 
>> On Dec 20, 2017, at 3:40 PM, dewey.hyl...@gmail.com wrote:
> [...]


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-20 Thread Dewey Hylton
All users have read/write permissions on those files, so this doesn’t make 
sense (to me) from a Unix permissions standpoint. 

I am indeed a BSD guy, but ... in reality fossil is running in a docker 
container on a Linux server and accessing the files via sshfs mount. I can futz 
about and make the UIDs match, but unless fossil itself is making decisions 
based on UID I don’t understand the point. 

I haven’t looked at the code and understand fossil may be dropping permissions 
at some point, but the fossil binary is running as the root user in this case. 

> On Dec 20, 2017, at 5:54 PM, Warren Young  wrote:
> 
>> On Dec 20, 2017, at 3:40 PM, dewey.hyl...@gmail.com wrote:
>> [...]
> 
> User 1000 doesn’t exist on your system, so those files are unreadable by 
> Fossil, which isn’t running as root.
> 
> [...]


Re: [fossil-users] fossil --repolist showing no repositories

2017-12-20 Thread Warren Young
On Dec 20, 2017, at 3:40 PM, dewey.hyl...@gmail.com wrote:
> 
> # ls -lh /fossils|grep fossil
> -rw-rw-rw-    1 1000     root      272.0K Dec 19 14:37 archsetup.fossil
> -rw-rw-rw-    1 1000     root      224.0K Dec 19 14:36 guac-install-script.fossil
> -rw-rw-rw-    1 1000     root      224.0K Dec 19 14:37 miscreports.fossil
> -rw-rw-rw-    1 1000     root      304.0K Dec 19 14:37 pkgReport.fossil

# chown -R arealuser /fossils

User 1000 doesn’t exist on your system, so those files are unreadable by 
Fossil, which isn’t running as root.

Bonus guess: you scp’d or rsync’d this over to a BSD box from a Linux box with 
permission preservation, where the Linux box starts normal users at UID 1000 
and the BSD box starts at 500.


[fossil-users] fossil --repolist showing no repositories

2017-12-20 Thread dewey.hyl...@gmail.com
Would someone help me understand what I'm seeing here? I expect a list of
repositories in the web page output, but am told there are none. I've banged
on this long enough to go cross-eyed, so I hope I'm just missing something
very simple. I haven't found anything in the code that would cause this other
than simply not finding a *.fossil file in the named directory ...


# /usr/local/bin/fossil version
This is fossil version 2.4 [a0001dcf57] 2017-11-03 09:29:29 UTC

# ps wax|grep [f]ossil
   10 root   0:00 /usr/local/bin/fossil server --repolist /fossils

# ls -lh /fossils|grep fossil
-rw-rw-rw-    1 1000     root      272.0K Dec 19 14:37 archsetup.fossil
-rw-rw-rw-    1 1000     root      224.0K Dec 19 14:36 guac-install-script.fossil
-rw-rw-rw-    1 1000     root      224.0K Dec 19 14:37 miscreports.fossil
-rw-rw-rw-    1 1000     root      304.0K Dec 19 14:37 pkgReport.fossil

# curl http://localhost:8080


<html>
<head>
<base href="http://localhost:8080/" />
<title>Repository List</title>
</head>
<body>
No Repositories Found
</body>
</html>


