On Tue, Dec 30, 2008 at 3:46 PM, Sebastian Thiel <[email protected]> wrote:
> Let's see whether I get this right:
> You are saying that having an asset management system just solves one part of the problem.
>
> paraphrasing robert
>
> If an asset management system is supposed to store resources and the dependencies between them, in the end you want these resources to end up in some final result ( like a movie ). Thinking about how these resources are organized and passed on between team-members and departments is the crucial ( and second ) part of the problem.
>
> The gto file format and its command-line tools can be used to extract production values from applications ( and their proprietary file formats ) into a common one, creating a new version. This file can be combined/merged/diffed with gto tools to proceed in the pipeline to build the final product.
>
> I agree with the general concept, but think it's yet another part of a pipeline in general. If something like a gto pipeline is done right, the asset management could in fact just focus on simple revision control of application files from which values are extracted and possibly stored in a different and implicitly versioned layout.
>
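The extract-then-diff step in the quoted summary can be sketched generically. The `extract` and `diff` callables below are toy stand-ins, not the actual gto tools, and every name here is an illustrative assumption:

```python
# Hypothetical pipeline step: extract values from an application file
# into a common representation, then diff against the previous
# extraction. 'values' are plain dicts standing in for a gto file.
def publish(extract, diff, app_file, prev_values):
    values = extract(app_file)
    changes = diff(prev_values, values)
    return values, changes

# toy stand-ins for the real extract/diff tools
extract = lambda f: {"translateX": 1.0, "visibility": True}
diff = lambda a, b: {k: (a.get(k), v) for k, v in b.items() if a.get(k) != v}

old = {"translateX": 0.0, "visibility": True}
vals, changes = publish(extract, diff, "shot.mb", old)
print(changes)  # -> {'translateX': (0.0, 1.0)}
```

The point of the sketch: the asset management layer only ever sees the extracted values and their diffs, never the proprietary scene file itself.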
That is, for the most part, what I am saying, at least as far as finding a reason to integrate subversion (or version control) into maya. The biggest obstruction to making it work is also, in my mind, worth getting around completely and not just stepping over. If the workflow complements the pipeline, artists can keep working versions of scenes in their own sandbox or shot-sandbox areas and the asset management system can remain focused on storing data and dependencies. But this toolset is only as good as its ability to function within a greater schema, and that requires the ability to interface with the same database without having to depend on a scripted interface or embedded shell like the one available through maya... although the fancy hooks would be a complement to a small-scale facility, it wouldn't really work once you try to take on other software, not without having to build similar interfaces again and again.

>
> chads idea
>
> As this discussion focuses on asset management functioning as a dependency-aware revision control system, I will now come back to Chad's ideas:
>
> Chad stores files on a storage ( i.e. network ) location and a local one, using symlinks to link a static local path to whichever revision of a resource is located on the storage location. This emphasizes easy revision switching ( of references, for instance ). He also mentioned that this is something that works in the linux world - ntfs junctions ( hardlinks ) only work on the respective local storage and cannot link to network shares/mounts on a different device ( I don't know about Vista though ).
> Keeping multiple check-outs of multiple revisions of the same file on the network would require different check-out folders, as svn cannot check out a file in different revisions into the same folder afaik. This asset management approach implies a folder structure that you build in some storage location.
>
> my approach to local/storage locations
>
> When I brought this local/network data-mix up, I actually had something different in mind:
> The network location is mounted write-protected and may only be modified by subversion itself. It's clearly not supported that some people use different versions of some resources, as only the latest version will be found on the storage location ( it's always up-to-date ).
> If a resource affects other resources ( like a rigged character affects scene 1 and scene 2 that both use it ), and the affecting resource breaks affected resource 1, but not resource 2, in my approach you can do nothing more than branch the working revision of the affecting resource specifically for resource 2, creating a new copy of it. Alternatively, and this is my main purpose of revision control ( so far ), one would revert the affecting resource and try again.
> Not using symlinks keeps it simple(r) and assures os compatibility.
>
>
> about paths
>
> So far, we have not really talked about how to handle paths: Using environment variables works in maya, using dirmap as well. Other applications might not be able to resolve variables. In the end, each application might require its own approach to handle different storage locations and to possibly switch between them.
>
I've got lots to say on this one, but will try to be brief.
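To make the quoted point about paths concrete: a minimal, application-agnostic sketch of dirmap-style prefix remapping, usable for applications that cannot resolve environment variables themselves. The prefix table and names below are purely illustrative assumptions, not an existing API:

```python
# A dirmap-style prefix remap: rewrite a stored path for whatever the
# current machine mounts the storage location as.
def remap_path(path, prefix_map):
    """Rewrite a stored path for the local machine."""
    norm = path.replace("\\", "/")   # normalize windows separators
    for src, dst in prefix_map.items():
        if norm.startswith(src):
            return dst + norm[len(src):]
    return norm  # unknown prefix: leave untouched

# illustrative mapping from a UNC storage root to a linux mount
PREFIX_MAP = {"//server/projects": "/mnt/projects"}
print(remap_path("//server/projects/shot010/tex/wall.tga", PREFIX_MAP))
# -> /mnt/projects/shot010/tex/wall.tga
```

Keeping this logic outside any one application is what lets the same table serve maya, nuke, and a render node alike.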
We came up with some solutions in the past which focused on trying to proc everything within python, building classes which contained environment data-members for managing ( and storing, with pickle ) the execution environment, so that tasks could be reopened remotely and then executed in series on the same machine with the correct environment per task ( we wanted to be able to use dependencies to create task lists which could be run remotely on a queue/render queue ). Python file streams ( Popen etc ) are not meant to be left open indefinitely and run into issues with leaving running/ghost processes on machines, which were hard to seek out and kill.

In the case of windows, or shops that have windows stations, there is also the problem of the dos shell not managing env vars correctly, or really being flexible enough to allow for any kind of reliable inheritance of variables to be passed to the process you start. We came up with some solutions resembling a linux setup, with some trickery using different built-ins, but overcoming the culture of starting everything from the desktop has been a battle... the end result of which is why I suggested a virtual-console or virtual-desktop. Using the asset picker to be responsible ( via dependency ) for setting the variables for JobSeqShot, MayaVersion, etc, and then supplying values to the commands we use to launch our shell scripts, has led me to thinking about how to treat windows env issues in a new way.

Ultimately the version controlling of maya assets, as a pipeline task, is greatly simplified when you only look at what you are trying to assemble; and more often than not that is either data for simming ( caching ) or data for rendering. Working backwards from these two issues and trying to limit the amount ( or complexity ) of that data so that it can be easily stored and accessed through a database is where a format like gto starts to really make sense...
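The environment handling described above (pickling per-task environment members, then launching the process with them explicitly instead of relying on shell inheritance) can be sketched roughly like this. This is modern Python with hypothetical names; the 2008-era setup would have used Popen directly:

```python
import os
import pickle
import subprocess
import sys
import tempfile

# Illustrative location for the stored task environment.
TASK_ENV = os.path.join(tempfile.gettempdir(), "task.env")

def save_task_env(path, **overrides):
    """Snapshot only the task-specific values ( JOB, SHOT, ... ); the
    base environment comes from whichever machine replays the task."""
    with open(path, "wb") as f:
        pickle.dump(overrides, f)

def run_task(path, argv):
    """Replay a task with its stored environment, passed explicitly to
    the child instead of relying on shell inheritance ( the part the
    dos shell gets wrong ). run() waits for the child, so no ghost
    processes are left behind."""
    with open(path, "rb") as f:
        overrides = pickle.load(f)
    env = dict(os.environ)   # start from this machine's base env
    env.update(overrides)    # task-specific values win
    return subprocess.run(argv, env=env, capture_output=True, text=True)

save_task_env(TASK_ENV, SHOT="sh010", MAYA_VERSION="2008")
result = run_task(TASK_ENV,
                  [sys.executable, "-c",
                   "import os; print(os.environ['SHOT'])"])
print(result.stdout.strip())  # -> sh010
```

Passing `env=` to the child sidesteps the shell-inheritance problem entirely, which is the same motivation behind the virtual-console idea.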
You can spend a lot of time trying to make flat binary scene data into auditable assets, OR you can spend MUCH less time tracking data whose integrity can be easily audited through the pipeline, and spend the extra time optimizing schemas for passing it through the pipe.

>
> git
>
> I wouldn't say that git couldn't do the job, but unfortunately I am not a specialist on it. As far as I could figure out, it stores files much more efficiently when they are checked out locally, as it does not store the revision base files for each checked-out file ( effectively doubling the amount of space used ).
> Also it has a git-server which would allow a centralized storage if required.
> It appears though that git cannot update/retrieve just one file, which is required to start working locally on a locked file. Speaking of which: Is it possible to lock files in git? I think it's not a built-in feature of the system.
>
> So in the end, svn seems to be favorable, although it's not perfect. For me it does the job pretty well though.
>
I'll have to look git up, I can't say I know what it is.

> conclusion
>
> That's a hell of a topic Hradec poked into. If we could figure out the 'one' way to get it right for most of the target users, I would be glad. Unfortunately I do not see it coming yet, as a thread/mailing list is probably not the best way to efficiently gather information - it gets lost in the pure mass of lines.
>
> So am I - lost ( for today ;) ).
>
I'm sure we will speak tomorrow anyways, goodnight. Robert

> Robert Durnin wrote:
>
> I've got to chime in and offer up this kit that a friend recently passed on to me:
>
> http://sourceforge.net/projects/gto/
>
> It's a 3d-package-agnostic geo anim caching format that has been built strictly enough to allow for usage on "locations in space" of all types in 3d and how to rebuild them in your renderer.
>
> I have been looking at some of these problems for a while, and believe a large part of the issue with tracking scenes disappears when you are able to break off "revisions" from "versions" and apply tracking only to data that is being passed around inside the pipeline ( and therefore between departments/apps ). If I am not being clear enough, I mean that tracking "saved" scenes is the greatest obstacle to an asset management tool, and it is almost completely circumvented when the team only needs to pass around raw ( ascii ) data... in the case of gto, streamlined data which can also be importance sampled and whose integrity can, for the most part, be reacquired through diffable versions.
>
> Breaking off the idea of a revision ( a saved scene file which is used to produce an asset ) from a version ( an asset which is generated by some process of development and approval ) greatly reduces the need to pass around binary scene files which contain a myriad of dependency nodes ( especially in the case of maya )... this largely defines a workflow spec for facilities, which may be hard to adopt, but, in my mind at least, is much easier to develop - alongside a set of classes with methods for deconstructing scenes for compatibility with a database - than an all-encompassing toolset which is trying to find sneaky ways of overcoming the limitations of binary dag flows.
>
> Chad: geo -> anim cache -> assembly: working backwards from the end result towards what you want to extract from the scene and what needs to be "gone back to" or "spliced" into new scenes for further review.
>
> Working entirely in maya also undermines the ability to pass an environment alongside the data which can be used in other apps that might need to manipulate it: Not all apps have a functioning scripting interface through which the asset management system can force calls to the os, and building a set of interfaces for the inside of each app separately is a waste of time. I can foresee coming up with a kind of virtual-desktop or virtual-console which could be used to proc new apps or visit the database and perform some actions on the results, but basing it in maya is closing off a lot of the other shops.
>
> Robert
>
> On Tue, Dec 30, 2008 at 1:02 PM, Chad Dombrova <[email protected]> wrote:
>
> hi all,
> i agree that the system would be best as a general asset management system, perhaps with sub-modules for different applications:
> cgsvn
> cgsvn.apps.maya
> cgsvn.apps.nuke
>
> Partially Local, All on Network
>
> People working together in one intranet usually prefer to update only the file they are currently working on ( i.e. the ma or mb file itself ) and want all resources to be pulled from the network server
> Setup an svn hook to keep a server-side checkout of your data up to date at all times
>
> in my mind, the server-side checkout is the most complicated part of the system and the most crucial to get right.
> in a normal production environment, there are different artists working on many shots, and each might require different revisions of the revision-controlled assets. unfortunately, to keep things simple, SVN is designed with a great deal of redundancy in mind -- each working copy might share 80% of the same files -- which doesn't matter much when working with text files, but for binary assets totaling in the hundreds-of-gigabytes range, this redundancy becomes untenable.
> so the question is: what is the best way to simultaneously provide multiple users any revision of the asset needed, while keeping rampant disk usage in check?
> some ideas for creating a production-friendly asset management system using SVN:
>
> create a directory on the server which will be the "network store" and register it with the "cgsvn" server ( perhaps through a config file )
> add a network checkout mode to the cgsvn python api. when checking out a file in this mode, the file is checked out to the network store location, and a symbolic link is created in the working copy that points to this network file.
> users should never directly interact with the files checked out to this network store. the files themselves could be named with a non-human-readable hash unique per file, so that each name is unique, even for multiple revisions of the same file. ( i believe that svn uses a hash to identify each file already, so we could use the same hash. it might even be available via the pysvn api. )
> when the user updates a networked file in their working copy, the cgsvn server first checks to see if the desired update revision already exists in the network store ( because another working copy is already using it as a networked checkout ). if it exists, the server simply updates the symbolic link in the working copy to point to the network file. otherwise, it checks out the file to the network store and then makes/updates the symlink. after the update is completed, the cgsvn server checks whether the version that was just replaced is currently used by any other working copies. if not, it is removed to save disk space. optionally, an expiration time can be specified, so that if it has been unused for more than x days, it is removed.
>
> advantages:
>
> avoids creating redundant copies of enormous assets, thereby dramatically reducing network traffic, storage space, and transfer delays.
> maintains the svn "working copy" paradigm, where ALL required files are represented within the working copy. using environment variables and/or relative paths throughout maya/shake/nuke scenes, the entire working copy could be relocated, even between local and network disks.
> a mechanism could be provided to switch the network store symlinks to point to a "local store" on the local hard disk for performance gains. this local store would be organized just like the network store, but would only contain files requested for local operation.
>
> disadvantages:
>
> symbolic links are not fully supported on windows XP ( it supports something called hard links, but i'm not sure what the difference is or whether they are posix compatible ). i've read that symbolic links are supported on Vista, but i have not tried it out yet. i will do some more research on the subject.
>
> some other thoughts:
>
> trac ( http://trac.edgewall.org ) is based on svn, mysql, and python and provides its own api which is svn-agnostic and therefore more future-proof. it provides higher-level functionality than the pysvn api. i found it much easier to use, but it might be too high-level for this type of project, considering how much low-level svn interaction will be necessary to accomplish what is needed.
> distributed systems such as git and mercurial would not work in a production environment because each working copy contains the entire repository, which could be multiple terabytes of data.
> svn would be my choice of VCS because it is actively being developed and it provides hooks for using custom diff and merge tools, which would let us create and integrate custom tools for each application -- maya, nuke, shake, etc -- that give more meaningful diffs than a straight text-based diff.
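Chad's network-store update (hash-named files, refcounted by working-copy symlinks) can be sketched in a few lines. This is a hedged outline, not cgsvn: `fetch` stands in for the real "check out revision r into dest" svn call, and the reference-count cleanup step is omitted:

```python
import hashlib
import os
import tempfile

def store_name(repo_path, revision):
    """A non-human-readable name, unique per ( path, revision ) pair.
    A real implementation would reuse svn's own content hash instead."""
    key = "%s@%d" % (repo_path, revision)
    return hashlib.sha1(key.encode()).hexdigest()

def update_networked_file(store_dir, repo_path, revision, wc_path,
                          fetch=lambda p, r, dest: open(dest, "wb").close()):
    """Check the file out to the network store only if no other working
    copy put it there yet, then (re)point the working-copy symlink."""
    stored = os.path.join(store_dir, store_name(repo_path, revision))
    if not os.path.exists(stored):
        fetch(repo_path, revision, stored)   # first consumer pays the cost
    if os.path.islink(wc_path) or os.path.exists(wc_path):
        os.remove(wc_path)                   # retarget an existing link
    os.symlink(stored, wc_path)
    return stored

# demo: two working-copy updates of the same asset share one stored file
store = tempfile.mkdtemp()
wc = tempfile.mkdtemp()
link = os.path.join(wc, "model.mb")
update_networked_file(store, "assets/model.mb", 3, link)
print(os.path.islink(link))  # -> True
```

Switching the symlink target between a network store and a "local store" for performance, as suggested above, would only change `store_dir`.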
>
> i'm very interested in this project and would love to collaborate, but from my standpoint the maya integration would necessarily come after the creation of a more general revision control system aimed at a CG production environment. i'm in the middle of doing research for my own solution, but it was not until i read this thread that my ideas really crystallized. i'm eager to hear everyone's feedback. great discussion, keep it up.
>
> -chad
>
> Supporting all possible back-ends means you would generalize the version control system API to support any possible one. This also means you have to put additional work into writing the middleware, and possibly stick to the smallest feature commonality between all the actual back-ends you want to use.
>
> From my point of view, dedicating to one backend will get you started faster, and choosing svn for that is probably not even a bad thing, as it is working wonderfully in general and with maya.
>
> A compromise would be to subclass the pysvn Client class, providing your own svn-style interface ( possibly improving the pysvn client ), which would give you the option to use it as an adapter to another version control backend later on.
>
> To nicely use svn within maya, you basically need the following ( keeping maya on multiple os's in mind ):
>
> set up the svn repository such that all resource files ( .ma, textures, caches, possibly everything ) require a lock
> Ensure scenes are truly up to date when maya opens them ( using Scene Messages/Callbacks )
> Possibly support two modes of operation:
>
> All local
>
> People are working fully decentralized ( ok, they have the svn repo on some server ) and need an up-to-date local copy of their files on local disk.
> Update scene files and all references/resources upon scene open ( requiring you to hook into maya a bit using callbacks )
>
> Partially Local, All on Network
>
> People working together in one intranet usually prefer to update only the file they are currently working on ( i.e. the ma or mb file itself ) and want all resources to be pulled from the network server
> Setup an svn hook to keep a server-side checkout of your data up to date at all times
>
> I use approach no. 2, as updating files is costly ( even though the actual process takes less than a second, it's a delay people feel ).
> To be truly general though, you would need to support both ways, requiring you to have some sort of config system to set you up accordingly ( .ini files? .xml? ).
>
> The similarities between 1 and 2 are mainly that at least one file needs to be updated before the scene actually opens ( the scene itself ). In 1 you additionally parse all dependencies ( recursively, possibly slow if not cached ) and update these dependencies as well.
>
> Here you see that approach one gets complex and time-consuming, but it would need to be supported if the hobbyist's requirements are to be met, allowing collaboration via the svn server.
>
> Zac Shenker wrote:
>
> Ideally what I would like to produce is a tight integration of an asset management solution into the Maya interface.
>
> As you suggested, it would probably be worthwhile to develop the asset management side in a form that is independent of Maya and then write a Maya plugin that makes use of the asset management code.
>
> I am now considering developing the asset management side to support more than just SVN, at this stage looking at CVS, SVN and git, but the system design would allow for integration of any version control system that one wants to write a module for.
>
> So far my asset management experience includes using CVS & SVN on code projects and trying to use SVN on Maya projects.
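The .ini-based config Sebastian floats for choosing between the two modes could be as small as this. Section and option names are made up for illustration:

```python
import configparser
import io

# Hypothetical config deciding between the "all local" and
# "partially local, all on network" modes of operation.
SAMPLE = """\
[checkout]
mode = network
update_on_open = true
"""

def load_checkout_config(fp):
    """Parse the config and validate the mode up front."""
    cfg = configparser.ConfigParser()
    cfg.read_file(fp)
    mode = cfg.get("checkout", "mode")
    if mode not in ("local", "network"):
        raise ValueError("unknown checkout mode: %r" % mode)
    return mode, cfg.getboolean("checkout", "update_on_open")

print(load_checkout_config(io.StringIO(SAMPLE)))  # -> ('network', True)
```

In "local" mode the tool would recursively update dependencies on scene open; in "network" mode it would update only the scene file and trust the server-side checkout.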
> > Zac > > On Tue, Dec 30, 2008 at 4:00 AM, Sebastian Thiel <[email protected]> > wrote: > > > This whole asset management part sounds like a general pipeline approach, > with svn as data storage and perhaps database ( ->properties ). > > Doing that would properly would make many people, including myself, quite > happy as this is one major backbone of any pipeline. > > Not being too familiar with Maya is probably won't be too much of a > problem as you can boil the system down to actually know about maya in the > end, by plugging it into your framework. > > Thus whenever one says asset management, I do not see maya in the first > place, but files and how they are organized. This is useful for many other > applications as well, that could play together with your asset management > system. > > Perhaps I am over-generalizing making things more complicated than they > actually are, and its absolutely fine to start 'sane' and focus on one > application at first to gain first experiences with asset management. > > Lets see how it goes for you - in my head it looks like a large project > requiring a good foundation. > > Zac Shenker wrote: > > At this stage the main focus is to provide a solution for the end users, > but in saying that where possible I am building Python Classes that provide > alot of useful functionality so it certainly could be useful for developers > as well. > > Personally I have quite alot of experience with Python and SVN but am new > to programming for Maya. > > I would like to ultimately include features that improve the pipeline and > workflow processes, such as: > Going through the Maya scene files and discover file textures that are no > longer being used. > Provide Version Control over all assests in the project > Easy to use management of revisions and file locks > Easy to use support for branching. > Possibly some basic support for automated merging and conflict resolution. 
>
> Regards,
> Zac
>
> On Mon, Dec 29, 2008 at 9:33 PM, Sebastian Thiel <[email protected]> wrote:
>
> Who will be your target audience? Will it be programmers that want to use your integration in their environments, or will it be users that just need nice user interfaces?
> Can you target both?
>
> From my perspective, pysvn already did the job for me. It does not come with any user interfaces, but for pipeline work, these can be very, very simple.
>
> Something I had to improve though is the pysvn client, to make it a little easier to use when trying to check out or update files in a deep directory structure which does not yet exist locally. If your project, in addition to that, would provide a nice modular user interface, it could work out well.
>
> Hradec wrote:
>
> Sounds really interesting man... I have been thinking about the potential of using svn in maya for a sort of asset management system for quite some time, which is a concept that your project could be used for, I think...
>
> I would be really interested in discussing the subject more, and maybe even contributing to your project...
>
> congrats for the effort and initiative...
>
> -H
>
> On Mon, Dec 29, 2008 at 1:01 AM, Zac Shenker <[email protected]> wrote:
>
> Hi Farsheed,
>
> Thanks for the comments. This project is still very much a work in progress; it still has a number of hardcoded variables and is missing a reasonable amount of the interface integration.
>
> If there is anyone out there interested in this project I would be happy to discuss design and goals of the project a bit further before coding much further.
>
> Regards,
> Zac
>
> On Mon, Dec 29, 2008 at 5:16 AM, Farsheed Ashouri <[email protected]> wrote:
>
> Nice work, Thank you.
> Sincerely,
> Farsheed.
