Thanks for the info and the suggestions. It seems I was mistaken; I had
hoped cpiobin would help get around the errors. Regarding "(and one
would only have to distribute the delta change rather than
*everything*).": unfortunately, the large numpy array that is the major
cause of the size is also the part that would change frequently.
[Basically it is a classifier that changes as the training set is
updated; I am looking into using better algorithms and feature
selection to limit the size, as sketched below.] Ideally I would have
liked to present this as a service to multiple destinations, but due to
a lack of available resources I am looking at packaging it instead.
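
[As an illustration of the direction I am considering, a rough sketch
of trimming the array before it ever reaches the package, assuming
scikit-learn; the data shapes, the value of k, and the file name are
all made up:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif

    # Hypothetical training data; X is the large feature matrix that
    # dominates the package size.
    X = np.random.rand(1000, 5000)
    y = np.random.randint(0, 2, size=1000)

    # Keep only the k most informative features, shrinking the fitted
    # array that ends up in the payload.
    selector = SelectKBest(f_classif, k=500)
    X_small = selector.fit_transform(X, y)

    # Store the reduced array compressed rather than raw.
    np.savez_compressed("classifier_data.npz", X=X_small)

Whether that gets the rpm under the cpio limit is something I still
need to measure.]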


On Tue, Mar 5, 2013 at 5:16 AM, Jeffrey Johnson <n3...@me.com> wrote:

>
> On Mar 5, 2013, at 12:01 AM, ark ph wrote:
>
> Is this what defines the format of the archive that rpm will use? Can
> I set this via an option to cpio? E.g., according to
> http://www.gnu.org/software/cpio/manual/cpio.html, with cpio
> --format=tar the maximum size of the archive can be 8589934591 bytes.
> Since cpio limits the size of the archive when creating a very large
> rpm, would this solve the issue?
>
>
> Actually it's these two macros that set the payload format:
>
> #       Archive formats to use for source/binary package payloads.
> #               "cpio"          cpio archive (default)
> #               "ustar"         tar archive
> #
> #%_source_payload_format        cpio
> #%_binary_payload_format        cpio
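
[For my own notes: a minimal sketch of how one would switch to the tar
payload, assuming an rpm that honors these macros, e.g. in
~/.rpmmacros:

    %_binary_payload_format ustar

or per build (spec name made up):

    rpmbuild --define '_binary_payload_format ustar' mypackage.spec

though, as noted below, this changes the archive format, not any size
limit.]
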
>
> I'm not at all sure what the "cpiobin" macro is or does: it's not in
> the sources @rpm5.org.
>
> Meanwhile, it's unlikely that you could/would be able to "fix" a cpio
> file/payload limit problem by changing a macro or the payload format
> in any version of rpm.
>
>
> [I am building a large package whose size is large primarily due to a
> large numpy array object, and I get a "cpio: bad magic" error. As of
> now I cannot deploy it as a service, hence packaging it with rpm; any
> suggestions on the best approach?]
>
>
>
> The best approach is to rethink how you wish to distribute large
> archives: there are many performance issues in downloading and
> uncompressing a single _HUGE_ archive. You are often better off
> splitting into smaller packages, particularly if some of the content
> changes less often than the rest (and one would only have to
> distribute the delta change rather than *everything*).
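
[If I do split it, I gather the spec would grow a data subpackage for
the model, roughly like this (a sketch with made-up names and paths,
not a complete spec):

    %package data
    Summary: Trained classifier data (large, retrained frequently)

    %description data
    The numpy array behind the classifier; rebuilt whenever the
    training set is updated, so only this subpackage needs
    redistributing.

    %files data
    /usr/share/myclassifier/classifier_data.npz

That way the code package changes rarely and the churn is confined to
the data package.]
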
>
> 73 de Jeff
>
