>
> Why is this extremely bad for usability? Would it be bad if we
> automatically synced the sources?
>

Well, I just consider having to type `cargo sync` whenever a package
changes bad usability. We could have a compromise to make it automatic (see
the bottom of the message).


> > <url>/crates.json - access to an array of all packages in the
> > source (by name), used by the cargo list feature
> > <url>/crates/<name>.json - access to a certain crate's information by
> > name (equivalent to the objects inside packages.json)
> > OPTIONAL: <url>/crates/<uuid>.json - access to a certain crate's
> > information by uuid (will probably change)
>
> Can we put the uuid in crates.json so we don't have to list the crate's
> information in two files? Presumably the uuid never changes.


OK. That was a compromise so we could install by uuid with a single
fetch. If the uuid were only in crates.json, we would need to fetch
crates.json and then the crate file. I guess that is a bad setup.


> > OPTIONAL: all of the above files have a .sig file each. Only required
> > if the source.json file specifies a key.
>
> Making Graydon sign so many files is asking a lot.


That's why I was suggesting it be a dynamic website. But yeah, I understand
that suggestion was pretty silly.

> Under the proposed scheme I would imagine 'name', 'uuid', 'tags' and
> 'description' still need to go into crates.json so that they can be
> discovered from the UI. The other two are details that could be left to
> the package-specific file and only retrieved during install. Is that
> right?


Yes, though description isn't even used in searching yet. The crates.json
file really shouldn't need to be there at all; it exists only so that a
static setup can still support searching and listing.


> Do you have scenarios in mind where we might want to implement this API
> instead of leaving it up to static file serving?


There are reasons to move away from the Github repo, but no reason to stop
using static files. You have a point. If we were to have a dynamic website,
we could accept submissions by means other than Github and host an online
index of packages without having to fetch the packages.json files from
Github to build it.

>
> Do we know how npm organizes its package index? I consider npm to be pretty
> rad and would be inclined to do anything that they do.


It works like I am proposing. Whenever something is installed it fetches
information from the API for a certain package.

`npm install blah` => https://registry.npmjs.org/blah (json file of
information about the package)

However, it does keep a local list of packages, and when it fetches that
list the update is incremental. So it works halfway between what we have
and what I am proposing.

If everyone wants to stick with the packages.json setup, we could make
syncing automatic: download a checksum of packages.json every time
something calls upon remote packages, and automatically `cargo sync` if
the checksum differs from the local one. Maybe even make it so Git sources
don't need this and just do a git pull every time you try to install /
search / list. However, I think we should include my `cargo sources add`
(inc. sources.json and source.json files) idea so we can have source
management, because there's really no use having sources built in if the
user can't manage them from the CLI. What do you think?
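A minimal sketch of that auto-sync check, assuming SHA-256 for the checksum and hypothetical hook names (`fetch_remote_checksum`, `run_cargo_sync`), neither of which is part of cargo today:

```python
# Sketch of the proposed auto-sync: compare a remotely published checksum
# of packages.json against a hash of the local copy, and resync on
# mismatch. The hash choice (SHA-256) is an assumption.
import hashlib

def checksum(data: bytes) -> str:
    """Hex digest of a packages.json payload."""
    return hashlib.sha256(data).hexdigest()

def needs_sync(remote_checksum: str, local_packages_json: bytes) -> bool:
    """True when the cached packages.json no longer matches the remote."""
    return checksum(local_packages_json) != remote_checksum

# e.g. before any install / search / list against a remote source:
# if needs_sync(fetch_remote_checksum(), read_local_packages_json()):
#     run_cargo_sync()
```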
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
