Re: how to distinguish public vs private (entry-point vs internal) systems

2019-02-06 Thread Jim Newton
> 
> I’m asking both how they should be named and how to advertise them for
> programmatic consumption.
> For example, an automatic testing program such as the one included in
> quicklisp should not try to load, stand-alone, systems which are not
> designed to work stand-alone. We have to work around this by artificially
> making all systems “work” stand-alone just enough to fool quicklisp.
> 
> Can you explain the quicklisp constraint? How does it find all systems?
> 
> One simple expedient for this quicklisp issue -- if I understand it correctly
> -- would be to have a default test-op perform method for all systems that
> simply succeeds. By default it should probably issue a warning that no "real"
> test method exists, and that warning should have a particular type so that it
> can be muffled by quicklisp. We should probably also allow the programmer of
> the original system to define a test-op no-op method that emits no warning
> (because the system is intended not to be testable).
> 


As I understand quicklisp, it tries to compile each system in a top-level sbcl,
and asserts that that works.
As far as I know that is the only test it does.  I don’t believe it does
anything special with test-op.
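
The default test-op behavior proposed above might be sketched as follows. This is a hypothetical illustration, not actual ASDF code; the condition name no-real-test-op is invented:

```lisp
;; Hypothetical sketch of the proposed default: TEST-OP succeeds on
;; any system, but signals a warning of a dedicated type so that a
;; tool like quicklisp can muffle exactly this warning.
(define-condition no-real-test-op (style-warning)
  ((system :initarg :system :reader no-real-test-op-system))
  (:report (lambda (condition stream)
             (format stream "No real TEST-OP method exists for system ~A."
                     (no-real-test-op-system condition)))))

(defmethod asdf:perform ((operation asdf:test-op) (system asdf:system))
  (warn 'no-real-test-op :system (asdf:component-name system)))
```

A system author who deliberately wants a silent no-op could then override this method with one that does nothing.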






Re: how to distinguish public vs private (entry-point vs internal) systems

2019-02-06 Thread Jim Newton
I’m asking both how they should be named and how to advertise them for
programmatic consumption.
For example, an automatic testing program such as the one included in quicklisp
should not try to load, stand-alone, systems which are not designed to work
stand-alone.  We have to work around this by artificially making all systems
“work” stand-alone just enough to fool quicklisp.

quickref is another tool which tries to publish documentation extracted from
packages, but quickref would like to skip packages which are not part of the
public API, such as test-case packages, which may require other non-public
testing frameworks.

It would be nice if asdf had some declarative way of specifying which systems 
are intended as entry points.
That would also avoid different people relying on non-standard naming 
conventions to encode declarative 
information.
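
Lacking such a feature, one could imagine what a declarative marker might look like. Everything below is invented for illustration: annotated-system and the :entry-point option are not existing ASDF features, and "my-app" is a made-up system name:

```lisp
;; Hypothetical sketch: a SYSTEM subclass carrying an entry-point flag.
;; DEFSYSTEM passes unrecognized options as initargs to the chosen class.
(defclass annotated-system (asdf:system)
  ((entry-point :initarg :entry-point :initform nil
                :reader entry-point-system-p)))

(asdf:defsystem "my-app"
  :class annotated-system
  :entry-point t                    ; public, load-me-directly system
  :depends-on ("my-app-internal"))
```

A tool like quickref could then filter registered systems with entry-point-system-p instead of guessing from naming conventions.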


> On 06 Feb 2019, at 15:36, Robert Goldman  wrote:
> 
> On 6 Feb 2019, at 2:22, Jim Newton wrote:
> 
> When creating a lisp application I usually have one (or several) of what I
> call top-level asdf systems which advertise the public interface to the
> application, and I may have several internal systems which are used but not
> intended for public use.
> 
> What is the convention with asdf to distinguish entry-point systems from 
> internal/private
> systems?
> 
> I generally try to use Faré's "slashy" systems (like "shop2/common")
> in my work. When I can, it's even better to use a :module which isn't visible
> at all.
> 
> I think what you are really asking is "how should I name a system that the 
> user should never load directly?" I don't have a great answer to this 
> question.
> 
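
For reference, the "slashy" convention mentioned above keeps a secondary system inside the primary system's .asd file; the component file names here are illustrative:

```lisp
;; shop2.asd -- ASDF locates "shop2/common" via this file
;; automatically, because its primary system name is "shop2";
;; the secondary system needs no .asd file of its own.
(asdf:defsystem "shop2"
  :depends-on ("shop2/common")
  :components ((:file "main")))

(asdf:defsystem "shop2/common"
  :components ((:file "common")))
```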



how to distinguish public vs private (entry-point vs internal) systems

2019-02-06 Thread Jim Newton
When creating a lisp application I usually have one (or several) of what I call
top-level asdf systems which advertise the public interface to the application,
and I may have several internal systems which are used but not intended for
public use.

What is the convention with asdf to distinguish entry-point systems from 
internal/private
systems?





Re: Multiple processing compiling the same file

2018-01-31 Thread Jim Newton
> 
>> For example, what if different hosts want to run the same source code but
>> with different optimization settings?
>> This is a real possibility, as some of my processes are running with
>> profiling (debug 3) and collecting profiling results,
>> and others are running super optimized (speed 3) code to try to find the
>> fastest something-or-other.
>> 
> Then have one output-translations per optimization setting, and
> produce two binaries with different names.
> 

I suspect this would be one binary for each permutation of the optimization
settings used, times the number of top-level entry points.  Right?  That number
is much larger than the number of /tmp directories I need just to automatically
compile before running.
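
For reference, the one-output-translations-per-setting suggestion being discussed might be sketched like this; the profile names and cache paths are hypothetical:

```lisp
;; Redirect all fasl output into a per-optimization-profile cache,
;; using the same string DSL as the ASDF_OUTPUT_TRANSLATIONS
;; environment variable ("source:destination" pairs).
(defvar *opt-profile* "speed3")   ; e.g. "speed3" or "debug3", chosen per job

(asdf:initialize-output-translations
 (format nil "/:/tmp/fasl-cache/~A/" *opt-profile*))
```

Each profile then gets its own fasl tree, so jobs compiled with (speed 3) never pick up fasls compiled with (debug 3).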






Re: Multiple processing compiling the same file

2018-01-31 Thread Jim Newton
Hi Faré, 

Thanks for taking the time to understand my comments.  I’ve tried to respond to
some of your questions below.  Sorry if my original post wasn’t explicit enough
to explain what I’m trying to do.


 If I run several sbcl processes on different nodes in my compute cluster, 
 it might happen that
 two different runs notice the same file needs to be recompiled (via asdf),
 and they might try to compile it at the same time.  What is the best way 
 to prevent this?
 
> You mean that this machines share the same host directory? Interesting.
> 

Yes, the cluster shares some disk and shares home directories.  And I believe
two cores on the same physical host share /tmp, but I’m not 100% sure about
that.


 
> That's an option. It is expensive, though: it means no sharing of fasl
> files between hosts. If you have cluster of 200 machines, that means
> 200x the disk space.

With regard to the question of efficient reuse of fasl files: this is
completely irrelevant for my case.  My code takes hours (10 to 12 hours worst
case) to run, but only 20 seconds (or less) to compile.  I’m very happy to
completely remove the fasl files and regenerate them before each 10-hour run.
(Note to self: I need to double-check that I do in fact delete the fasl files
every time.)  Besides, my current flow allows me simply to git-check-in a
change and re-launch the code on the cluster in batch.  I don’t really want to
add an error-prone manual local-build-and-deploy step if that can be avoided,
unless of course there is some great advantage to that approach.

> 
> What about instead building your application as an executable and
> delivering that to the cluster?

One difficulty with your build-then-deliver suggestion is that my local machine
is running macOS and the cluster is running Linux.  I don’t think I can build
Linux executables on my Mac.


>> 
> You can have different ASDF_OUTPUT_TRANSLATIONS or
> asdf:*output-translations-parameter*
> on each machine, or you can indeed have the user cache depend on
> uiop:hostname and more.
> 

This is what I’ve ended up doing.  And it seems to work.  Here is the code
I have inserted into all my scripts.

(let ((home (directory-namestring (user-homedir-pathname)))
      (uid (sb-posix:getuid))
      (pid (sb-posix:getpid)))
  (setf asdf::*user-cache*
        (ensure-directories-exist (format nil "/tmp~A~D/~D/" home uid pid))))




> The Right Thing™ is still to build and test then deploy, rather than
> deploy then build.

In response to your suggestion about build-then-deploy: this seems very
dangerous and error-prone to me.  For example, what if different hosts want to
run the same source code but with different optimization settings?  This is a
real possibility, as some of my processes are running with profiling (debug 3)
and collecting profiling results, while others are running super-optimized
(speed 3) code to try to find the fastest something-or-other.

I don’t even know whether it is possible to write the .asd files so that
changing an optimization declaration will trigger recompilation of everything
depending on it.  And if I think I’ve written my .asd files as such, how would
I know whether they are really correct?

It is not currently the case, but it may very well be in the future, that I
want different jobs in the cluster running different git branches of my code.
That would be a nightmare to manage if I try to share fasl files.

> Using Bazel, you might even be able to build in parallel on your cluster.

Bazel sounds interesting, but I don’t really see the advantage of building in
parallel when it only takes a few seconds to build, but half a day to execute.

> I still don't understand why your use case uses deploy-then-build
> rather than build-then-deploy.


I hope it is now clear why I can’t: (1) my local machine is macOS while the
cluster is Linux; (2) different jobs in the cluster are using different
optimization settings; (3) a future enhancement is to have different cluster
nodes running different branches of the code.

Kind regards
Jim

Re: Multiple processing compiling the same file

2018-01-23 Thread Jim Newton
Apparently this approach seems to work, though I’m not sure it is the best
approach.  Here is what my code looks like.  It creates a directory under
/tmp/, and asdf:load-system seems to compile the .fasl files into it.


(require :asdf)
(require :sb-posix)
(let ((home (directory-namestring (user-homedir-pathname)))
      (uid (sb-posix:getuid))
      (pid (sb-posix:getpid)))
  (setf asdf::*user-cache*
        (ensure-directories-exist (format nil "/tmp~A~D/~D/" home uid pid))))

#-quicklisp
(let ((quicklisp-init
"/lrde/home/jnewton/quicklisp/setup.lisp"))
  (if (probe-file quicklisp-init)
  (load quicklisp-init)
  (error "file not found ~S" quicklisp-init)))
(asdf:load-system :lisp-types-test)


> On 23 Jan 2018, at 12:47, Pascal Bourguignon <p...@informatimago.com> wrote:
> 
> 
> 
>> On 23 Jan 2018, at 12:00, Jim Newton <jnew...@lrde.epita.fr 
>> <mailto:jnew...@lrde.epita.fr>> wrote:
>> 
>> If I run several sbcl processes on different nodes in my compute cluster, it 
>> might happen that two different runs notice the same file needs to be 
>> recompiled (via asdf),
>> and they might try to compile it at the same time.  What is the best way to 
>> prevent this?
>> 
>> I see in the asdf documentation that there is an asdf:*user-cache* variable
>> whose value is the pathname of the directory where asdf compiles into.
>> Would it be advisable for me to arrange for asdf:*user-cache* to be a
>> function of the pid and hostname and perhaps thread-id (if such a thing
>> exists) to avoid such collisions?
>> 
>> Or is there some better way to handle this which is built into asdf?
> 
> I had requested that ASDF includes the hostname (or machine-instance), in the 
> built path for the cache.
> Unfortunately, for some reason, the maintainers of ASDF thought it was a good
> idea to remove it.
> There you are!
> 
> 
> 
> 
> -- 
> __Pascal J. Bourguignon__
> 
> 
> 



Multiple processing compiling the same file

2018-01-23 Thread Jim Newton
If I run several sbcl processes on different nodes in my compute cluster, it 
might happen that two different runs notice the same file needs to be 
recompiled (via asdf),
and they might try to compile it at the same time.  What is the best way to 
prevent this?

I see in the asdf documentation that there is an asdf:*user-cache* variable
whose value is the pathname of the directory where asdf compiles into.  Would
it be advisable for me to arrange for asdf:*user-cache* to be a function of the
pid and hostname and perhaps thread-id (if such a thing exists) to avoid such
collisions?

Or is there some better way to handle this which is built into asdf?
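
A minimal sketch of the arrangement described above, assuming SBCL for the pid and a /tmp-based cache root (the path is hypothetical; note that asdf::*user-cache* is an internal variable):

```lisp
;; Make the fasl cache specific to this host and process, so
;; concurrent cluster jobs never compile into the same directory.
(require :asdf)
(require :sb-posix)
(setf asdf::*user-cache*
      (ensure-directories-exist
       (format nil "/tmp/fasl-cache/~A/~D/"
               (uiop:hostname) (sb-posix:getpid))))
```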

how to "register it in front of the former"

2016-04-14 Thread Jim Newton
Can someone help me with an asdf problem I don’t understand.

When I load quicklisp I get the warning (below), apparently from ASDF.
Advice (b) says that I might “install a newer ASDF and register it in front of
the former in your configuration.”

Can someone explain to me how I can do that?

I’m working on a machine where I don’t have administrator rights, so I can’t
change anything in /usr etc.
I’ve installed and compiled sbcl locally, and downloaded a local copy of the
newest asdf under my home directory.

Thanks.
Jim

WARNING:
   You are using ASDF version 3.1.3 (probably from (require "asdf") or loaded
   by quicklisp) and have an older version of ASDF 3.0.2 registered at
   #P"/usr/local/share/common-lisp/source/asdf/asdf.asd". Having an ASDF
   installed and registered is the normal way of configuring ASDF to upgrade
   itself, and having an old version registered is a configuration error. ASDF
   will ignore this configured system rather than downgrade itself. In the
   future, you may want to either: (a) upgrade this configured ASDF to a newer
   version, (b) install a newer ASDF and register it in front of the former in
   your configuration, or (c) uninstall or unregister this and any other old
   version of ASDF from your configuration. Note that the older ASDF might be
   registered implicitly through configuration inherited from your system
   installation, in which case you might have to specify
   :ignore-inherited-configuration in your in your
   ~/.config/common-lisp/source-registry.conf or other source-registry
   configuration file, environment variable or lisp parameter. Indeed, a likely
   offender is an obsolete version of the cl-asdf debian or ubuntu package,
   that you might want to upgrade (if a recent enough version is available) or
   else remove altogether (since most implementations ship with a recent asdf);
   if you lack the system administration rights to upgrade or remove this
   package, then you might indeed want to either install and register a more
   recent version, or use :ignore-inherited-configuration to avoid registering
   the old one. Please consult ASDF documentation and/or experts.
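
For example, advice (b) could be followed with a source-registry file along these lines; the ~/asdf/ checkout location is hypothetical and should be the directory containing the newer asdf.asd:

```lisp
;; ~/.config/common-lisp/source-registry.conf
;; Entries listed here take precedence over inherited configuration,
;; so the local asdf.asd shadows the old one under /usr/local.
(:source-registry
  (:directory (:home "asdf/"))
  :inherit-configuration)
```

If the old registration still wins, the warning's option of replacing :inherit-configuration with :ignore-inherited-configuration applies, at the cost of then having to list every other source directory explicitly.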