Introduction
A few years ago, we took the first steps toward implementing a global pipeline at Sony Online Entertainment. Since then we have iterated on it and abstracted out its essence and backbone. At the end of May 2011, I held a Game Connection Master Class in Paris that covered this setup and some other tech art related topics. I decided to write this blog article to cover the nuts and bolts of that 7-hour master class. First off, let's define what I mean by “Global Pipeline”. For us, it means a company-wide art pipeline foundation or framework. It carries delivery mechanisms for tools and workflows to our DCC apps, our Python environment, and 3 levels of foundational pipeline data (global, team and user), all set up in such a way that we can easily implement tools for multiple teams simultaneously.
The biggest challenge to multi-team coding was taking variations in folder structures and naming conventions out of the equation. A major role of this setup is also to deliver the entire code depot and Python environment without having artists install Python, or any of the many modules, locally on their machines. We also want different teams to be able to run the global tools from multiple locations or in different “modes” (local, P4, network drive, as well as developer mode) while still being able to easily sync to updates that come from a single source code depot.
Before talking about the implementation, I will give some background and history that led to the latest iteration, which was a complete re-write of the whole pipeline foundation.
Background
A Tool Delivery Mechanism
Many years ago, when I first started out in MEL land, we repurposed a script from Highend3d.com (clipFX.mel) that had a section of code that built a Maya menu from a folder structure. Check that folder structure into P4, have all the artists sync up, modify their userSetup.mel to call the menu-building script and voilà – we have a tool delivery system. That was the bare-bones starting point.
This toolbox then grew, and tons of MEL code was piled into it. Each new team at the company then tended to duplicate this tool box around and any tools that were improved upon would usually stay isolated to the team that had made the improvement.
The second step, and the first attempt at a global toolbox, was pushing the Perforce-managed source code of all generic tools out to a network location that the entire company could reach. Now each team did not have to duplicate things around, and the tech artists contributing to it were all working from the same source code.
Initial Improvements & Added Complexity
The following subsections represent a few years of evolution from this starting point and explain how we arrived at the complexity that forced us back to the drawing board to build our new pipeline foundation, which is the last portion of this article.
From Eval Sourced To Runtime Commands
In the early versions, each tool was eval-sourced during the menu build, which happened on Maya startup. That was a very bad situation, because a single error would halt the tool menu from loading.
As the tools piled in, this also made for a sluggish boot-up of Maya. An improvement was to build a runtime command for each tool, which was then attached to the command of the menu item.
All the runtime commands were saved under their own category heading, which makes them a unique group in the hotkey editor and gives the artist the option to hotkey any tool belonging to the toolbox inside Maya.
Menu Preferences
Menu Modes
As you installed the global tools, you could pick where to run them from: local, Perforce, a network drive, or developer mode.
There were also mechanisms built in that would allow a user to switch modes on the fly.
Conditionally Shared Sub Menus
As you installed the tools, you picked your team association(s) and active team. The global tools would then expose certain sub menus if the team was in turn associated with a specific game engine.
Say some teams were using Unreal: you would only want to expose the Unreal-specific tools and exporters to those team members. There were also mechanisms in place that allowed the user to change team associations and the active team on the fly.
Procedurally Spawned Team Menus
Team menu spawner – as the global tools are more generic, we abstracted templates that are parsed and filled in with key variables so that each team can very quickly spawn its own toolbox for things completely custom to that team. This spawner had a simple user interface where you just filled in the needed info and you were up and running in no time.
The global team toolbox was aware of any local team ones, so we could unify the preferences in one spot. Team menu systems also contain two important hooks created by the global system, a team save intercept and a scene open intercept, where anything about a scene can be enforced (scene units being a good example).
Python
In Maya 8.5, Python slithers in, and we now configure the Python environment for PyMel as well as plain Python as part of this setup.
Summary
Let's take a look at where we stand at this point:
Transition
The transition section will quickly go through major milestones we hit concurrently with our improvements and iteration. It mainly deals with an expansion into Python, PyMel and an object-oriented programming approach, and how each benefits us in the end. I know that I will be preaching to the choir in many cases here, but I will keep on preaching in case there are a few of you who have still not made this transition.
At the end of the transition section, we will take a step back, crack our knuckles and get into the implementation of our new pipeline foundation that alleviates our situation described in the summary above.
The Zen Of Python
There are so many things to really like about Python and in the following sub headers, I bring up a few things that I love about python and that I use every day. They are also present in the implementation code we will go through later and I want to make sure that everyone will be able to follow along.
One of Python's main features is listed in PEP 20 – The Zen of Python – which is “readability counts”. The truth is, the more readable your code is, the faster you and everyone else can work. The other points listed in PEP 20 are not bad either:
- Readability Counts
- Simple is better than complex
- Complex is better than complicated
- Flat is better than nested
- Sparse is better than dense
- There should be one – and preferably only one – obvious way to do it
- Although that way may not be obvious at first unless you’re Dutch
- If the implementation is hard to explain, it’s a bad idea
- If the implementation is easy to explain, it may be a good idea
String Formatting
String formatting is great: it gives you a better overview of the string you are building (compared to concatenation), and it can cast from different data types without breaking. No more “cannot concatenate 'str' and 'float' objects” errors for you.
# Python String Formatting
# Casting from different variable types, which breaks when using the "+" method
myNumber = 43.67
exampleString = 'The chance is' + myNumber + 'percent'
>>> TypeError: cannot concatenate 'str' and 'float' objects
exampleString = 'The chance is %s percent' % myNumber
>>> 'The chance is 43.67 percent'
# It is also much easier to read, especially when you are building longer strings
import getpass
# Instead of doing this
pathToMyDocs = 'C:/Users/' + getpass.getuser() + '/Documents'
# Do this
pathToMyDocs = 'C:/Users/%s/Documents' % getpass.getuser()
P.S. This style of string formatting is de-emphasized in Python 3, so if you want to future-proof, you can use the .format method, which is available in Python 2.6+. Thanks to Jason Parks for pointing this out. Examples below.
# Python String Formatting using .format
# Positional arguments
print '[{0}, {1}, {2}]'.format(1, 2, 3)
# Named arguments
print '[{one}, {two}, {three}]'.format(three=3, two=2, one=1)
# Sequential without specifying index - Python 2.7+ only
print '[{}, {}, {}]'.format(1, 2, 3)
Complete Filesystem Package
os.walk traverses directory structures, and while doing so gives you three parts per directory visited – the root (full path of the current directory), a list of its sub-directory names, and a list of its file names. Tech artists typically do a lot of directory crawling when building pipelines and workflows, and os.walk is invaluable for this purpose.
“path” is a standalone package (path.py) that anyone can download and use; it is super useful for anything dealing with files and folders.
import path
import os

xmlFiles = []
for root, dirs, files in os.walk(myPath):
    for fileName in files:
        fullPath = path.path('%s/%s' % (root, fileName))
        # e.g. path('C:/test/myTest/test.xml')
        if fullPath.ext == '.xml':
            xmlFiles.append(fullPath)

# Convenience attributes and methods on a path object:
twoDirsUp = fullPath.parent.parent
>>> path('C:/test')
fullPath.isfile()
>>> True
fullPath.isdir()
>>> False
fullPath.basename()
>>> 'test.xml'
fullPath.namebase
>>> 'test'
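If you would rather stay within the standard library, os.path covers much of the same ground. A quick sketch of the equivalents (the file path is just an example):

```python
import os

fullPath = 'C:/test/myTest/test.xml'

ext = os.path.splitext(fullPath)[1]       # '.xml' - note the leading dot
baseName = os.path.basename(fullPath)     # 'test.xml'
nameBase = os.path.splitext(baseName)[0]  # 'test'
twoDirsUp = os.path.dirname(os.path.dirname(fullPath))  # 'C:/test'
```

The path package wraps all of this into methods on one object, which is what makes it so pleasant to use.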
Built in XML & SQL Libraries
Having the ability to use one of the most common data container formats in the world is key. ElementTree in Python rocks and is very straightforward and easy to use. In the example below, we parse an XML file, get the tree iterator, get an attribute value, eval an attribute (a nice Python trick that lets us put Python types straight into the XML – when evaluated, it reads back as the correct type), set an attribute value, and write the XML back out.
<testXML>
    <Core
        attributeName="myAttr"
        attributeName2="['one', 'two', 'three']">
    </Core>
</testXML>
import xml.etree.ElementTree as ET
def xmlUsage():
""" XML Usage Example """
XMLPath = 'C:/test.xml'
testXML = ET.parse(XMLPath)
core = testXML.getiterator('Core')[0]
mVal = core.get('attributeName')
mVal2 = eval(core.get('attributeName2'))
core.set('attributeName', 'modifiedAttr')
testXML.write(XMLPath)
<!-- Result From Above Python -->
<testXML>
    <Core
        attributeName="modifiedAttr"
        attributeName2="['one', 'two', 'three']">
    </Core>
</testXML>
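A side note on the eval trick: since Python 2.6 there is also ast.literal_eval, which performs the same string-to-Python-type conversion but only accepts literals (lists, dicts, numbers, strings), making it a safer choice for data files. A quick sketch:

```python
import ast

# The attribute string from the XML above
attrString = "['one', 'two', 'three']"
value = ast.literal_eval(attrString)  # a real Python list

# Numbers, tuples, dicts and booleans work too
number = ast.literal_eval('43.67')
```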
A variety of database modules are also available if you want to use that.
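For example, sqlite3 has shipped with Python since 2.5 and gives you a full SQL database with nothing extra to install. A minimal sketch (the table and column names are made up for illustration):

```python
import sqlite3

# In-memory database; pass a file path instead to persist to disk
conn = sqlite3.connect(':memory:')
cur = conn.cursor()

# Hypothetical asset-tracking table
cur.execute('CREATE TABLE assets (name TEXT, polyCount INTEGER)')
cur.execute('INSERT INTO assets VALUES (?, ?)', ('pCube1', 12))
conn.commit()

cur.execute('SELECT polyCount FROM assets WHERE name = ?', ('pCube1',))
row = cur.fetchone()  # (12,)
conn.close()
```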
Logging replaces all print and debug statements
Python has an awesome logger module that replaces all your debug print statements. We will look at the logger more closely in the upcoming PyMel section.
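As a taste of what is coming, here is the bare standard-library version (the PyMel logger is built on top of this same module; the logger name here is arbitrary):

```python
import logging

# Loggers are named singletons - getLogger returns the same object everywhere
log = logging.getLogger('myToolbox')
log.setLevel(logging.INFO)

# Calls below the current level are suppressed
log.debug('Verbose developer spam - hidden while the level is INFO')
log.info('User-facing message - this one shows')

# Flip one switch to see the debug output again
log.setLevel(logging.DEBUG)
```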
__name__ and __main__
A really neat trick is to put an “if __name__ == '__main__':” check at the bottom of your module, outside your classes. The block under the if only runs when the file is executed directly; a subsequent else block only runs when the module is imported. This gives us really nice control over when certain things run, and we will be using this functionality later on.
# File name = test.py
class ClsA(object):
""" My A Class """
def __init__(self):
super(ClsA, self).__init__()
print 'Running Class A'
def methodA(self):
print 'Running Method A'
class ClsB(object):
""" My B Class """
def __init__(self):
super(ClsB, self).__init__()
print 'Running Class B'
if __name__ == '__main__':
ClsA()
else:
ClsB()
import test
>>> 'Running Class B'
myClassInstance = test.ClsA()
>>> 'Running Class A'
myClassInstance.methodA()
>>> 'Running Method A'
Universal Variables
Another handy trick is to make a module containing nothing but variables. After importing this module you can read the variables as below. Each becomes a very flexible variable: changing it in this one module changes it everywhere it is used in your code library. If a global var is…well…global, this would be…universal…yep, let's call them universal variables.
# uVars.py
xToolVersion = 3.2
emergency = 0
uPath = 'C:/Very/Important/Path'
import uVars
currentVersion = 3.1
if not uVars.emergency:
    # Update if our local version is older than the depot version
    if currentVersion < uVars.xToolVersion:
        updateStuff()
myLoc = '%s/python/someFile.py' % uVars.uPath
Natural Transition to OOP
By coding in Python, you will inevitably start investigating classes. Once you discover them, and the power of the object-oriented approach, you will never look back. If you are completely new to Python but know MEL or some comparable scripting language, I will briefly demonstrate the use of functions in Python; this approach is very much how we went about our daily lives in MEL for the most part.
# functions.py
def myFunction():
""" Function that does something """
print 'I am running function.'
def myOtherFunction(info, moreInfo=''):
""" Flexibility of Python, but be aware """
passed = info
print 'I am printing type(%s) - %s' % (type(passed), passed)
passed = 4.3567
print 'I am printing changed type(%s) - %s' % (type(passed), passed)
if moreInfo: print 'Running that too - %s' % moreInfo
def getCapitalOfCountry(country):
if country == 'France': return 'Paris'
# Usage of above module
import functions
functions.myFunction()
>>>'I am running function.'
functions.myOtherFunction('test', 'moreTest')
>>> I am printing type(<type 'str'>) - test
>>> I am printing changed type(<type 'float'>) - 4.3567
>>>'Running that too - moreTest'
# If you are making changes to your code, this is how you refresh without having to re-import
reload(functions)
# Store variable
cap = functions.getCapitalOfCountry('France')
print cap
>>>'Paris'
We will talk about Object Oriented Programming and the use of classes next.
Object Oriented Programming
The example below illustrates how to set up a class, how to instantiate it, and how to call it. If you don't need to pass anything to the class every time it is used, you can make one instance object of the class and use that instance object throughout your code. All the methods (same as functions, but the proper terminology when part of a class) are attached to the class instance object. If you are in an IDE, this is a speed multiplier like no other – every attached method pops up right at your fingertips. The __init__ portion of the class runs upon instantiation. The “self.” prefix makes a variable an instance attribute, reachable from any other method of the class as long as it has been created beforehand.
# classes.py
class MyClass(object):
""" Top Class """
def __init__(self):
super(MyClass, self).__init__()
self.country = 'Sweden'
def getCapitalOfCountry(self, country=''):
""" Method that returns capitals """
if not country: country = self.country
if country == 'France': return 'Paris'
elif country == 'Sweden': return 'Stockholm'
# Usage Of Above
import classes
classObj = classes.MyClass()
classObj.getCapitalOfCountry()
>>> 'Stockholm'
One of the most powerful concepts in OOP is inheritance. If you set up a new class as shown below, all the methods of the inherited class become part of the new class instance. Another thing worth noting is that “self.country” overrides the inherited class variable. Methods work the same way: if I were to insert a “getCapitalOfCountry” method into “MyOtherClass”, it would override the “MyClass” one.
# classes.py
class MyClass(object):
""" Top Class """
def __init__(self):
super(MyClass, self).__init__()
self.country = 'Sweden'
def getCapitalOfCountry(self, country=''):
""" Method that returns capitals """
if not country: country = self.country
if country == 'France': return 'Paris'
elif country == 'Sweden': return 'Stockholm'
class MyOtherClass(MyClass):
""" Top Class """
def __init__(self):
super(MyOtherClass, self).__init__()
self.country = 'France'
# Usage Of Above
import classes
classObj = classes.MyOtherClass()
classObj.getCapitalOfCountry()
>>> 'Paris'
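To illustrate the method overriding mentioned above, here is a hypothetical variation where MyOtherClass supplies its own getCapitalOfCountry:

```python
class MyClass(object):
    """ Top Class """
    def getCapitalOfCountry(self):
        return 'Stockholm'

class MyOtherClass(MyClass):
    """ Overrides the inherited method """
    def getCapitalOfCountry(self):
        # This version wins for MyOtherClass instances
        return 'Paris'

capital = MyOtherClass().getCapitalOfCountry()  # 'Paris'
```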
For convenience's sake you can make a utility class object at the end of your file. When you import the module, you have access to this utility object. We can also make a class that inherits all the other classes; when it is instantiated, it holds everything (and in your IDE, everything available pops up at your fingertips).
# classes.py
class MyClass(object):
""" Top Class """
def __init__(self):
super(MyClass, self).__init__()
self.country = 'Sweden'
def getCapitalOfCountry(self, country=''):
""" Method that returns capitals """
print 'MyClass method'
class MyOtherClass(MyClass):
""" Other Class """
def __init__(self):
super(MyOtherClass, self).__init__()
self.city = 'Trollhattan'
class MyThirdClass(MyClass):
""" Third Class """
def __init__(self):
super(MyThirdClass, self).__init__()
self.state = 'California'
class MyUberClass(MyOtherClass, MyThirdClass):
""" Third Class """
def __init__(self):
super(MyUberClass, self).__init__()
uberObject = MyUberClass()
# Usage
import classes
classes.uberObject.getCapitalOfCountry()
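A note on how Python decides which inherited method wins when, like MyUberClass, a class inherits from several parents: lookup follows the Method Resolution Order, left to right. A small sketch with illustrative names:

```python
class Base(object):
    def whereAmI(self):
        return 'Base'

class Left(Base):
    def whereAmI(self):
        return 'Left'

class Right(Base):
    def whereAmI(self):
        return 'Right'

class Uber(Left, Right):
    pass

# Lookup order: Uber -> Left -> Right -> Base -> object
order = [cls.__name__ for cls in Uber.__mro__]
result = Uber().whereAmI()  # 'Left', because Left is listed first
```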
If I were to summarize Object Oriented Programming, these would be the bullet points:
- Powerful Scaling
- Highly Reusable
- Organization Friendly
- Extremely powerful and fast in an IDE
The Languages Of Maya
Here are your options for scripting in Maya today with an attached example of each:
- MEL – Maya Embedded Language. This started it all! Thanks MEL!
setAttr "pCube1.translate" 0.7 0.7 0.8;
- maya.cmds – Python-wrapped MEL
cmds.setAttr("pCube1.translate", 0.7, 0.7, 0.8)
- PyMel – Pythonic Maya language
mCube = PyNode('pCube1')
mCube.translate.set(0.7, 0.7, 0.8)
Which one is most legible and easiest to read? Let’s talk more about that one.
PyMel
There are an awful lot of good things to say about PyMel. It is class-based and object-oriented. When you cast any Maya node to a “PyNode” (this happens automatically most of the time), you are given a class instance of the type that belongs to that Maya node (joint, transform, vertex, edge, etc.). Attached to this class instance are tons of useful convenience methods. This results in extremely clean and readable code. Remember PEP 20?
Readability Counts
In the example below, we loop through all the vertices of a mesh and store the verts on the boundary of the mesh. We also verify that the initial node is of the correct type.
from pymel.core import *

mNode = selected()[0]
if isinstance(mNode.getShape(), nt.Mesh):
boundaryVerts = []
for vert in mNode.getShape().verts:
if vert.isOnBoundary():
boundaryVerts.append(vert)
select(boundaryVerts, r=1)
API Hybridization
API hybridization is a huge selling point of PyMel; it is the term for what goes on behind the scenes when a Maya node is cast to a class instance. The class instance contains an API connection to the Maya node. The result is that you get an API-backed “name-independent representation” of your node.
If you have scripted a lot in MEL, you will recognize the nightmare in the example below.
string $myJoint = "joint1";
// Result: joint1 //
rename "joint1" "joint2";
// Result: joint2 //
select $myJoint;
// Error: line 1: No object matches name: joint1 //
With PyMel, this nightmare is no more…you can even rename it manually in the scene and the select portion still works.
myJoint = PyNode('joint1')
myJoint.rename('joint2')
myJoint.select()
You can see that the methods are directly attached to the object. You can find out in code which methods are attached by running the snippet below. Again, if you are in an IDE, all those methods will pop up for you to choose from.
myJoint = PyNode('joint1')
for method in dir(myJoint): print method
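Since dir() also lists all the internal double-underscore names, a quick filter keeps the output down to the methods you usually care about. A plain-Python sketch (in Maya you would run this on the PyNode instead of a string):

```python
# Keep only the public methods and attributes
publicMethods = [m for m in dir('someString') if not m.startswith('_')]
# e.g. 'capitalize', 'split', 'upper', ... for a string object
```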
MEL call wrapper
When you start coding in Python and PyMel, you are not going to want to go back. There might still be times where you have to call some MEL procedures, though. That's where PyMel's MEL call wrapper comes in handy: it keeps your syntax flow consistent and, again, is way more readable.
maya.mel
import maya.mel as mm
mm.eval('skinWeightsIO -p "%s" -m 0;' % path)
PyMel
mel.skinWeightsIO(p=path, m=0)
PyMel Logger & Logging Menu
Earlier, we briefly mentioned Python's logging mechanism; let's explore that mechanism in PyMel. When you write code, you will sometimes want to do simple debugging, and the old-school way is to put print statements of your variables and results everywhere. Before shipping your tools out to the artists, you comment everything out so that the script editor doesn't spam all of that every time your tool is used (but you still want artists to see some feedback).
Enter the PyMel logger. You can assign logger calls certain levels (by default debug, info, warning, error, and critical). When your code runs, the logger instance's level determines which of the various logger calls produce output. Additionally, you can enable a logging menu inside Maya that has sub menus for all existing loggers and allows everyone to set their own logging levels. If you sit down at someone's machine and want to see all your “spam” (debug output), simply set the logger to debug level. Examples below.
from pymel.internal.plogging import pymelLogger
# Default Levels - the debug call also shows how to join a list into one log string
pymelLogger.debug('Found %s shaders:\n%s' % (len(shaderList), '\n'.join(s.name() for s in shaderList)))
pymelLogger.info('User Message')
pymelLogger.warning('Colored Text Output')
pymelLogger.error('Colored Text Output - That red output box we all know and love in Maya')
pymelLogger.critical('Critical Message')
# Manage Levels Through Code:
import logging
pymelLogger.setLevel(logging.DEBUG)
pymelLogger.setLevel(logging.INFO)
# User Can Manage Levels Through The Maya Logging Menu - This is how you enable it
from pymel.tools import loggingControl
loggingControl.initMenu()
If you get an error clicking on the logging menu in Maya, replace the …pymel-1.0.3/pymel/tools/
Disadvantages
The MObject conversions in the API make PyMel slightly slower, which is a very small price to pay for what you get. If you will be doing heavy vertex iteration over hundreds of thousands of verts, you probably want to use the API anyway. That one issue is also being worked on.
Code Depot Structure
I believe in a well-structured code library, which gets easier to build upon and keep organized as you grow.
Cells to Organ Systems
Approach it like building a human – smaller highly reusable cells that build into larger systems.
A Python Package is a folder that contains modules, which are Python code files. Our package structure is laid out like the below image. We have a common package collection (light red) that is not tied to any specific DCC app. Many of these packages are utilized in multiple DCC apps, as well as compiled standalone Python programs. We then have package collections specific to our DCC apps (light green) and a unitTests package collection.
Notice the MayaTools package collection; it is laid out in the cells-to-organs approach. PyMel and PyQt are very foundational in nature. Maya is our core package, and everything outside it is tool and pipeline packages.
Let's take a closer look at the Maya core package; the modules are organized like the below image. Again, we follow the human-building approach. mayaCore is our cellular-level module and contains very foundational building blocks (green). The specialized core modules (blue) are a bit more specific and address more specific needs. The system frameworks (yellow) are generic frameworks ready to plug into pipelines; they are built up from everything above them.
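As a rough, hypothetical sketch of the kind of layout described here (reconstructed from the import statements that appear later in this article; the real depot contains far more):

```
python/
    fileIO/           # common, DCC-agnostic packages
        soe_path.py
        SOEG_parser.py
    diagnostics/
        soeLogger.py
    core/
        gVarInit.py
    soe_maya/         # Maya-specific package collection
        mayaCore.py   # cellular-level core module
        mayaMenu.py
    unitTests/
```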
Maya Core & Utility Objects
Let's take a look at the structure of our Maya core module (the most generic one). For the most commonly used classes, utility class instance objects are included. We also include easily accessible and often-used objects such as Perforce, logger and schemaParser class instances. The code below just shows structure, so tons of subclasses and methods are omitted, and the few that remain are shown as if toggled closed in the IDE.
class Maya(object):
def __init__(self):
super(Maya, self).__init__()
# Foundational Parse Object
self.soegParseObj = SOEG_parser.ParseSOEG()
# Parse foundational XML data
self.soegParseObj.parseSOEG()
# Make logger
self.setLogger()
# Perforce
if self.soegParseObj.isp4Active:
self.p4 = soe_p4.P4Lib()
class MayaReferenceCore(Maya):
def __init__(self):
""" MayaNamespaceCore.__init__(): set initial parameters """
super(MayaReferenceCore, self).__init__()
def refreshRefs(self, node='', sync=1):
def getNestedRefs(self, ref=''):
class MayaNamespaceCore(Maya):
def __init__(self):
""" MayaNamespaceCore.__init__(): set initial parameters """
super(MayaNamespaceCore, self).__init__()
def nameSpaces(self, case):
def removeNameSpace(self, ns, nuke=0):
class MayaMeshCore(Maya):
def __init__(self):
super(MayaMeshCore, self).__init__()
def cleanMeshes(self, meshes=[]):
def verifyMeshSelection(self, all=0):
class MayaCore(MayaNamespaceCore, MayaReferenceCore, MayaMeshCore):
def __init__(self):
""" MayaCore.__init__(): set initial parameters """
super(MayaCore, self).__init__()
mCore = MayaCore()
A bit of history on the above module: it used to be a huge file filled with functions. The sub-classing is purely for organizational purposes, and it makes it easier to find what you are going to add or edit through the IDE outliner. On the coding side, all the methods are attached to a single object, so no time has to be spent remembering which cellular module certain functionality lives in. The way I would advise using it is as follows:
from MayaTools import mayaCore
mayaCore.mCore.cleanMeshes()
mayaCore.mCore.removeNameSpace(ns='myNamespace', nuke=1)
To keep things running smoothly, figure out where to place your methods. Are they Maya-specific? Could they be common to all? It is worth taking a few minutes to think it through.
IDE
IDE stands for Integrated Development Environment, and trust me, once you go there you will never go back (and I mean ever). IDEs contain super powerful features that will speed up your tool development.
Eclipse
Eclipse (with PyDev) is a great IDE option. It is free, open source and very popular, which means a wealth of awesome add-ons and plug-ins. One of them – PyDev – turns Eclipse into a Python powerhouse.
Below are some great features you get with an environment like this:
Mark Occurrences
The fastest way to see what is happening to your vars: highlighting a variable highlights all occurrences of it in your code.
Code Completion
This is where our class structure, and PyMel's, really shines; it is ultra powerful. Your development environment is fully aware of all your code. You start typing and Eclipse shows you the way.
Tool Tips
With tool tips, no memory is required. You are given pop-ups with the doc string and a preview window of the full method. You are even given a hyperlink that opens the code in Eclipse if there is a need to edit what is being referenced.
Auto Import
This is context-insensitive code completion. You code away, happily calling your classes and methods, and Eclipse finds them and adds the import statements at the top automatically.
Code Analysis
Not only is PyDev aware of the environment, it keeps tabs on your code as well, in real time. It will tell you about a missing “self” token in class methods, unused imports and variables (keeping your code fat-free), mixed/bad indentation and syntax errors.
The net result is that you don't have to find out about these things through trial and error, which saves you time.
The Outliner
The Outliner shows all and is a very fast way of traversing through your code.
Debugging
Debugging in an IDE is huge. The debug session uses your mayapy.exe interpreter, meaning you are running the full Maya environment in your IDE. You can pause code in a live Maya session, as well as capture artists' sessions and debug them in real time.
To the left is a debug session in Wing. Wing is a little more robust than Eclipse when it comes to debugging Maya.
Implementation
Ok, so after all that history and evolution, let’s tackle the implementation of our new pipeline foundation.
Goals
Here are a few goals we defined for ourselves before starting the task:
- Deliver the Python environment without local installs
- Provide tool delivery system to DCC apps
- Files important to the set up should not be local
- Built In logging and logging control
- Set up layers/levels of accessible data – global, team & user levels
- Compile a Pipeline Foundation Setup & Management Tool with the following features:
- Data driven backend (XML for portability)
- Easy install, manage and debug
- Multiple modes
- Team Save Intercept
- Team Scene Preferences
- Unified Team menu spawning with access to data
- Data feedback on tool usage
We have a few DCC apps that get their environment delivered to them. Let's take a close look at how this is achieved for Maya. The basics of the Maya boot-up are that a Maya.env file gets read very early in the boot process, followed by a userSetup.mel (and/or .py). The image below shows the basic flow.
Below is a flowchart that will help us visualize the boot-up process we have implemented.
Maya.env
Maya.env is read very early in the boot-up process (see #1 in the Maya Boot Process image). We insert the path to PyMel, as well as the path to PyQt, here.
PyMel ships with Maya, but we have moved it out of there to be able to update it independently from Autodesk.
These two have to be inserted super early in the boot-up process, so Maya.env was the best place for us.
// Resulting Maya.env
SOEG_PYMEL = W:/Tools/SOEglobal/python/SOEmayaTools/pymel-1.0.3
SOEG_PYQT = W:/Tools/SOEglobal/python/SOEmayaTools/PyQt4/2012/win64
PYTHONPATH = $SOEG_PYMEL;$SOEG_PYQT
userSetup.mel
userSetup.mel is the gateway to going non-local: from here, we insert a call to mayaMenuBoot.py, which lives on our network (see #2 in the Maya Boot Process image).
mayaMenuBoot.py
import sys
import os
import xml.etree.ElementTree as ET
import imp
if not __name__ == '__main__':
try:
# Parse the XML
if os.path.exists("//Sdlux3/Projects2/StudioArt/SOEglobal/installData/SOEG.xml"):
menuModulePath = '%s/%s' % ("W:/Tools/SOEglobal", ET.parse("//Sdlux3/Projects2/StudioArt/SOEglobal/installData/SOEG.xml").getiterator('Core')[0].get("menuModuleRelLoc"))
sys.path.append(os.path.split(menuModulePath)[0])
else: raise Exception('network down')
# import sysGlobalMenu
moduleName = os.path.splitext(os.path.basename(menuModulePath))[0]
fp, pathname, description = imp.find_module(moduleName)
startModule = imp.load_module(moduleName, fp, pathname, description)
# Set Off the creation & Set up
startModule.MayaMenu().startUp()
except:
startModule.MayaMenu().injectPaths()
else:
print 'Running from main'
startModule.MayaMenu().injectPaths()
Three important files are involved here: SOEG.xml (a data file that contains global and team data), sysGlobalMenu.py (the MayaMenu class that is responsible for everything listed to the right in the Maya Boot Process image above), and the file we are looking at itself (mayaMenuBoot.py).
Let’s pick apart what happens:
1. The path to the mayaMenuBoot module is built up and appended to the sys path. The reason the path is built up like this for each user is that it may be in different locations for different people, depending on which mode they chose when installing the pipeline foundation. The file we are looking at is actually templated and filled in when the pipeline is installed; we will talk more about that later. For now, let's focus purely on what happens in the code.
When the module is loaded (imp.load_module), the “if __name__ == '__main__':” check at the bottom of the sysGlobalMenu module is evaluated – actually the else branch in this case, for reasons discussed in the Python section. For clarity's sake, the bottom part of that module is shown below. When the else runs, the “MayaMenu().injectPaths()” line runs. “MayaMenu()” runs the init, which parses all of the XML data (global level, team level and user level). We will talk later about what type of information the various levels contain, but for now, know that variables are set up that we will need during the menu build and the rest of the setup. The “injectPaths()” method of the class adds all the necessary sys paths for the entire library, starting from the location the user is running from (data contained in the user-level XML). Following this call we have the entire environment in place, so we import all the modules necessary for the next part of the boot process.
if __name__ == '__main__':
print 'Ran Main'
else:
MayaMenu().injectPaths()
from fileIO import soe_path, SOEG_parser
from pymel.tools import loggingControl
from diagnostics.soeLogger import SOE_logger
from soe_maya import mayaCore, mayaMenu
from core import gVarInit
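The load-by-path plus module-level else pattern can be exercised outside of Maya. Below is a minimal sketch (file and class names hypothetical, using importlib in place of the imp call since imp is deprecated in modern Python):

```python
import importlib.util
import os
import tempfile

# A stand-in for mayaMenuBoot.py: the module-level 'else' runs as soon
# as another script loads the module (because __name__ is then the
# module name, not '__main__').
BOOT_SOURCE = """
BOOTED = []

class MayaMenu(object):
    def injectPaths(self):
        # The real class appends the whole code depot to sys.path here
        BOOTED.append('paths injected')
        return self

if __name__ == '__main__':
    print('Ran Main')
else:
    MayaMenu().injectPaths()
"""

# Write the boot module to a known location, as the installer would
boot_path = os.path.join(tempfile.mkdtemp(), 'mayaMenuBoot.py')
with open(boot_path, 'w') as f:
    f.write(BOOT_SOURCE)

# The modern equivalent of imp.load_module: load by explicit file path
spec = importlib.util.spec_from_file_location('mayaMenuBoot', boot_path)
startModule = importlib.util.module_from_spec(spec)
spec.loader.exec_module(startModule)

# The else branch has already run by the time the load returns
print(startModule.BOOTED)
```

The point of the pattern is that simply loading the file is enough to bootstrap the environment; no explicit function call is needed from userSetup.mel beyond the load itself.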
2. The “startModule.MayaMenu().startUp()” call is made, which runs the rest of the boot process:
def startUp(self):
    """ Menu Start Up Command """
    # Set the logger
    self.setLogger(1)
    # Spawn Legacy Global Variables
    self.setGlobalVars()
    self.log.debug('Global Vars Created')
    # Inject Variables
    # Done Upon instantiation
    # Construct Menu Base
    self.soegMenu_build()
    self.log.debug('SOE Global Menu Built')
    # Print Environment
    self.soegMenu_envPrinter()
    self.log.debug('Environment written to network')
    # Copy Local Environment Files
    self.soegMenu_localFileCopy()
    self.log.debug('User Setup and Maya env copied to network')
    # Write Metrics File (Basic Info)
    self.soegMenu_metrics()
    self.log.debug('Metrics written to file')
    # Initialize Logger Menu
    self.soegMenu_loggerMenuInit()
    self.log.debug('Logger menu initialized')
    # By default, quiet the quite spammy PyQt loggers
    qtLog01 = logging.getLogger('PyQt4.uic.properties')
    if qtLog01: qtLog01.setLevel(20)
    qtLog02 = logging.getLogger('PyQt4.uic.uiparser')
    if qtLog02: qtLog02.setLevel(20)
    # Start up xPrefs
    self.soegMenu_xPrefsInit()
    self.log.debug('X Prefs initialized')
    # Create the plug-in paths and load plug-ins (deferred like this due to a timing conflict with a 3rd party plug-in)
    scriptJob( runOnce=True, event=['idle', self.injectPlugPaths] )
    scriptJob( runOnce=True, event=['idle', self.loadPlugIns] )
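The logger-quieting trick near the end of startUp is plain Python logging: setLevel(20) is logging.INFO, so DEBUG spam from a chatty third-party logger is dropped while warnings still pass. A minimal sketch (logger name hypothetical):

```python
import logging

# A stand-in for a chatty third-party logger such as PyQt4.uic.properties
noisy = logging.getLogger('thirdparty.uic.properties')
noisy.propagate = False  # keep the demo's records out of the root logger

records = []
handler = logging.Handler()
handler.emit = records.append  # capture records instead of printing them
noisy.addHandler(handler)

noisy.setLevel(logging.DEBUG)
noisy.debug('spam')             # passes while the level is DEBUG
noisy.setLevel(20)              # 20 == logging.INFO, as in the menu code
noisy.debug('more spam')        # now filtered out
noisy.warning('still visible')  # INFO and above still get through

print([r.getMessage() for r in records])
```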
Short Summary of what we have done up to this point:
- The __init__ portion of the SysGlobalMenu class parses the 3 levels of XML data – Global, Team & User. We’ll talk some more about what we store and how it gets there when we get to the tool that installs and manages our environment.
- The second part, the .injectPaths() call following instantiation, runs the injectPaths method after the init has parsed all of our XML data. Here we are simply doing sys.path.append for the Python environment, as well as setting needed variables such as MAYA_SCRIPT_PATH, MAYA_PLUGIN_PATH, and anything else that you want to add.
- When this first line has run, we import the modules needed for the sysGlobalMenu class from our code library. From this point on, we have the entire environment delivered to the artists, which was a major goal (dynamically deliver the environment to the artists without having them install anything).
- All of the above happened on imp.load_module. Following that we use the loaded module variable and make the call to the method that runs the rest of what we want the Maya tool delivery system boot up to do, the details of which are outlined in the next section.
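The injectPaths idea itself is small. Here is a hedged sketch (root path and folder layout hypothetical; the real method reads everything from the user-level XML):

```python
import os
import sys

class PathInjector(object):
    """Sketch of an injectPaths-style method: put the code depot on
    sys.path and extend the Maya environment variables."""

    def __init__(self, root):
        self.root = root  # the real class reads this from the user XML

    def injectPaths(self):
        # With the depot root on sys.path, imports like
        # 'from fileIO import soe_path' and 'from core import gVarInit'
        # resolve from the boot file
        if self.root not in sys.path:
            sys.path.append(self.root)
        # Prepend a MEL script location to MAYA_SCRIPT_PATH
        scripts = os.path.join(self.root, 'mel')
        existing = os.environ.get('MAYA_SCRIPT_PATH', '')
        os.environ['MAYA_SCRIPT_PATH'] = (
            scripts + os.pathsep + existing if existing else scripts)
        return self

injector = PathInjector(os.path.join('tools', 'SOEGlobal')).injectPaths()
print(os.environ['MAYA_SCRIPT_PATH'])
```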
The start up call ( MayaMenu().startUp() )
Worth mentioning is that part of the menu construction is building tool logging into the run time command that gets attached to each menu item, so that each user feeds data about frequency of use. We can then compile that data and display a graph of total tool usage, as well as filter down to individual users, etc. Having all that information, as well as a running log for every user on the network, has been a huge help! In the history section, I showed our MEL run time command; below you see the current command build-up and call:
# Build run time command string
rtCommand = "mel.source('%s')\nmel.%s()\nfrom fileIO import SOEG_parser\nreload(SOEG_parser)\nSOEG_parser.ParseSOEG().toolSet('%s')" % (sPath, fName, fPath)
# Make Run Time Command Name
rtName = '%s_%s' % (self.SOEtPad, fName)
# If the run time command doesn't exist
if not runTimeCommand(rtName, q=1, ex=1):
    runTimeCommand(rtName, commandLanguage="python", annotation=self.runTimeAnnotation, category=self.SOEprojectName, command=rtCommand)
The “SOEG_parser.ParseSOEG().toolSet()” method increments that tool’s usage count in the user’s XML data:
def toolSet(self, tool):
    """ Increments the tool use by one """
    toolRan = soe_path.Path(tool)
    toolXMLTitle = toolRan.namebase
    if not self.userToolUsageXMLPath.exists():
        root = ET.Element("toolUsage")
        head = ET.SubElement(root, toolXMLTitle)
        head.text = toolRan
        head.attrib['runTimes'] = '1'
        # Wrap it in an ElementTree instance, and save as XML
        tree = ET.ElementTree(root)
        tree.write(self.userToolUsageXMLPath)
    else:
        try:
            tUsageXML = ET.parse(self.userToolUsageXMLPath)
        except:
            #logger.warning('SOEG_parser.ParseSOEG.toolSet() failed to parse %s' % self.userToolUsageXMLPath)
            print 'SOEG_parser.ParseSOEG.toolSet() failed to parse %s\nProbably has no data.' % self.userToolUsageXMLPath
            return
        found = 0
        root = tUsageXML.getiterator('toolUsage')[0]
        for child in root:
            if child.tag == toolXMLTitle:
                pRT = int(child.get('runTimes'))
                child.set('runTimes', str(pRT+1))
                found = 1
        if not found:
            newItem = ET.SubElement(root, toolXMLTitle)
            newItem.text = toolRan
            newItem.attrib['runTimes'] = '1'
        tUsageXML.write(self.userToolUsageXMLPath)
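Outside of the SOE path classes, the same counter logic can be exercised with nothing but the standard library (file location and tool path hypothetical):

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def tool_set(usage_path, tool_path):
    """Increment one tool's run count in a per-user XML usage file."""
    title = os.path.splitext(os.path.basename(tool_path))[0]
    if not os.path.exists(usage_path):
        # First tool ever run: create the usage file
        root = ET.Element('toolUsage')
        node = ET.SubElement(root, title)
        node.text = tool_path
        node.set('runTimes', '1')
        ET.ElementTree(root).write(usage_path)
        return 1
    tree = ET.parse(usage_path)
    root = tree.getroot()
    node = root.find(title)
    if node is None:
        # First run of this particular tool
        node = ET.SubElement(root, title)
        node.text = tool_path
        node.set('runTimes', '0')
    runs = int(node.get('runTimes')) + 1
    node.set('runTimes', str(runs))
    tree.write(usage_path)
    return runs

usage = os.path.join(tempfile.mkdtemp(), 'toolUsage.xml')
tool_set(usage, '/tools/anim/poseLib.py')
tool_set(usage, '/tools/anim/poseLib.py')
print(tool_set(usage, '/tools/anim/poseLib.py'))
```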
Menu Core Functionality & Results
Let’s take a look at some of the core features of the built Maya menu. Selecting and changing some of these options will directly modify the underlying XML data which in turn will influence how the code will be evaluated. For example, turning off the P4 flag will flip a switch in a user’s xml data which will turn off P4 calls everywhere in our code libraries. Switching the active team will route save intercepts, scene preferences and the entire environment to the newly picked team.
Installation & Management
To install and manage our pipeline foundation, I decided to compile my own program with a data-driven XML backend. Data driving certain things is a good idea to minimize recompiles. For the UI I went with PyQt, and to interpret our tool usage into meaningful graphs I used matplotlib.
Set up to compile
First of all, it is very easy to get set up to compile your own applications. Follow my guide here to get set up to roll your own 32 and 64 bit programs. The beauty of the compiled approach is that artists do not have to install anything locally; Python and all the modules used get rolled into your compiled application.
XML Data Levels
Mentioned a few times are the global, team and user levels of XML data. The global-level SOEG.xml file is pre-populated with data for the installer and the pipeline foundation itself. The global and team levels are shown below (in the interest of space, I took out all teams but one):
The pipeline installer & manager that we are about to talk about will initially create the user-level XML data. Mine is shown below. Going forward, we can utilize the user-level XML data for individual tool settings and preferences where it makes sense.
I wrote a generic parser to parse all levels of xml data. It is shown below:
from fileIO import soe_path
import getpass #@Reimport
from core import gVarInit
import xml.etree.ElementTree as ET
import re
import os

class ParseSOEG(object):
    def __init__(self):
        """ ParseSOEG.__init__(): set initial parameters """
        super(ParseSOEG, self).__init__()
        self.globalSoegXMLPath = soe_path.Path('%s/installData/SOEG.xml' % gVarInit.remoteLoc)
        self.dataLoc = soe_path.Path('%s/data' % gVarInit.remoteLoc)
        self.toolTemplates = soe_path.Path('%s/installData/toolTemplates' % gVarInit.remoteLoc)
        self.userSoegXMLPath = soe_path.Path('%s/data/%s/SOEG.xml' % (gVarInit.remoteLoc, getpass.getuser()))
        self.userToolUsageXMLPath = soe_path.Path('%s/data/%s/toolUsage.xml' % (gVarInit.remoteLoc, getpass.getuser()))

    def parseSOEG(self, ignoreEmulation=0):
        """ Utility method to parse the XML data spawned from SOE Global """
        try:
            if self.globalSoegXMLPath.exists():
                # Parse the backbone XML
                self.globalSoegXML = ET.parse(self.globalSoegXMLPath)
                self.globalXMLCore = self.globalSoegXML.getiterator('Core')[0]
                if self.userSoegXMLPath.exists():
                    # Parse the user XML
                    self.userSoegXML = ET.parse(self.userSoegXMLPath)
                    self.userXMLCore = self.userSoegXML.getiterator('Core')[0]
                    if not ignoreEmulation:
                        # Overload user emulation and set project tool path
                        if self.userXMLCore.get('Emulate'):
                            if eval(self.userXMLCore.get('Emulate')):
                                eUser = self.userXMLCore.get('EmulatedUser')
                                if eUser and eUser != 'None':
                                    self.userSoegXML = ET.parse('%s/data/%s/SOEG.xml' % (gVarInit.remoteLoc, eUser))
                                    self.userXMLCore = self.userSoegXML.getiterator('Core')[0]
                    # Parse Active Team XML (if it is there)
                    self.teamXMLCore = ''
                    if self.globalSoegXML.getiterator(self.userXMLCore.get('SOEactiveTeam')):
                        self.teamXMLCore = self.globalSoegXML.getiterator(self.userXMLCore.get('SOEactiveTeam'))[0]
            return 1
        except:
            return 0

    @property
    def isp4Active(self):
        """ Return P4 state """
        # parse global xml data
        self.parseSOEG()
        # added to address cases where self is ParseSOEG and no userXMLCore exists
        try: return eval(self.userXMLCore.get('P4Active'))
        except: return 0

    def userGet(self, tag, evaluate=0):
        """ Utility method to grab values from global user """
        if self.userSoegXMLPath.exists():
            if re.search('SOEP4', tag):
                pRes = self.parseSOEG(1)
            else:
                pRes = self.parseSOEG()
            if pRes:
                if self.userXMLCore.get(tag):
                    if evaluate:
                        return eval(self.userXMLCore.get(tag))
                    else:
                        return self.userXMLCore.get(tag)
        else:
            return 0

    def userSet(self, tag, value):
        if self.userSoegXMLPath.exists():
            if self.parseSOEG(1):
                self.userXMLCore.set(tag, value)
                self.userSoegXML.write(self.userSoegXMLPath)

    def globalGet(self, tag, evaluate=0):
        """ Utility method to grab values from global """
        if self.parseSOEG():
            if self.globalXMLCore.get(tag):
                if evaluate:
                    return eval(self.globalXMLCore.get(tag))
                else:
                    return self.globalXMLCore.get(tag)
        else:
            return 0

    def teamGet(self, tag, evaluate=0):
        """ Utility method to grab values from team """
        if self.parseSOEG():
            if evaluate:
                return eval(self.teamXMLCore.get(tag))
            else:
                return self.teamXMLCore.get(tag)
        else:
            return 0
The Pipeline Manager
Here are the goals of our pipeline manager, which directly correlate to the XML data needed by the pipeline foundation, as well as some other functionality that we will go through. The program shows up a little bit differently if you are flagged as a developer. Amongst other things, you can spawn team menus, as well as emulate any artist’s environment with the click of a button. I will reiterate that the data for many of these UI widgets gets pulled from the core SOEG.xml.
Note the Remote Install Data – Found field. If the SOEG.xml is not found, the field unlocks and lets you set it from any other path, which makes the whole pipeline portable.
- Team Associations – multiple team associations that, when selected, will show up in the active team drop down menu. Having multiple team associations allows for switching pipeline functionality with the click of a button when jumping between teams. The full list of available company teams (projects) exists as data in the core XML (Global)
- Active Team – Pick the currently active team
- Menu mode – Where the tools will be run from:
- Local – Users or teams that want to run the tools locally. This option will duplicate the code depot to their location of choice.
- Perforce – Teams that run from Perforce have team XML data that specifies the root of this path, so clicking that option in the UI will auto-fill the path. We have a P4 branch spec to push from the source depot to the P4 release locations. I will go through specifically how we work and deploy our code later on in this article.
- Remote – Runs straight from shared network drive. We currently use MS Sync Toy to push from source depot to the network location.
- Developer – Runs from source depot (will not show up as an option to non-developers)
- Tool Path – Shows the root tool path depending on their choice above. For P4, and Remote modes, this field locks since those paths are predefined in the various team level XML data.
- P4 Settings – User Name, Workspace name, and server (server is autofilled if you pick a team that has provided that information through the XML).
- Maya Version – The pipeline supports 2011+. This list gets auto-populated using the datetime Python module: it takes the current year and adds 1, so when January 2013 comes, Maya 2014 will show up…
- Maya Bit Version – As soon as the Maya version is selected, the docs/Maya folder is analyzed to see which bit version exists, and this drop-down is set automatically. A user can override the detected bit depth if they have both versions and want the pipeline installed for 32 bit (for whatever reason).
- Maya userSetup – When the Maya version is set, the tool will auto-set the path to the userSetup.mel (if it doesn’t exist, it will be created). Behind the scenes, the same thing is done for the Maya.env file with the help of the path class (‘%s/Maya.env’ % userSetupPath.parent). We also check that neither of these files is read-only, since we will be modifying them.
- Install & Uninstall – Sets up the files needed for the art pipeline foundation. In detail it:
- Inserts the PyMel and PyQt paths to the Maya.env
- Inserts the one call to mayaMenuBoot.py in userSetup.mel
- Creates the mayaMenuBoot.py file in the user’s data directory (network). We do this by copying a template file of the Python script. For the variables that change per user, we insert keywords that are then replaced, and in the end we get a working script file. This trick works really well, and you can even apply it to .ma Maya files. We have a tool that generates assets from templates; you can, for instance, change a path to a reference into a datafied path and then just find and replace these data keywords (same goes for node names, texture paths, etc.). I like to use the @keyword@ syntax in the template code.
- Creates the user XML data file that is used for user level data, which is populated with information from the install UI.
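The @keyword@ template trick described above is a plain find-and-replace. A minimal sketch (keyword names and values hypothetical):

```python
def fill_template(template_text, values):
    """Replace @keyword@ markers with per-user values, the way the
    installer fills in the mayaMenuBoot.py template."""
    for key, value in values.items():
        template_text = template_text.replace('@%s@' % key, value)
    return template_text

# A fragment of a hypothetical boot-file template
template = "bootRoot = '@toolRoot@'\nteamName = '@team@'"
filled = fill_template(template, {'toolRoot': 'D:/SOEGlobal', 'team': 'teamA'})
print(filled)
```

The same function works for .ma files, texture paths, or node names, since Maya ASCII scenes are just text.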
The section in the image below will initially not be visible, but once the pipeline foundation has been installed, it will be revealed. It contains a set of really handy utilities.
- Emulate – Drop-down list with all users that have the foundational pipeline installed. Since all of our data is on the network (copies of userSetup and Maya.env included), selecting a user will make a backup of your environment and replace it with theirs. You can easily step into their shoes, so to speak; selecting “Emulate” again will copy back your environment. This utility feature only shows up for developers.
- Tool Usage – This is another feature that is only exposed to devs. It pops up a window that interprets all users’ tool usage XML data into a matplotlib chart that can be filtered in a number of ways. It allows you to see all usage, filter out devs, see a single user’s data only, etc.
- Fix Local – This utility copies the network copy of either or both the userSetup.mel and Maya.env files back to a user’s local machine. This is handy when you want to tweak something and can tell the user to simply open the program and click that button to update the local files.
- Team Tool Box Root – Root path to the tool box. The team toolbox is always Perforced.
- Team Tool Box Relative Loc – The folder relative to root, that Maya will build the tool delivery system from.
- Team Module Relative Loc – This is the location of the soon to be created team menu python module as well as the team menu boot python file.
- Team Menu Title – Maya menu title. Behind the scenes, this name is “made legal” (spaces and illegal characters removed) and used as the variable name for the menu.
- Team Pad – Since the structure is very similar to the global toolbox, and a user can run multiple team menus at the same time, this name is padded onto the internal variables of the menu.
- Dig URL – This optional path will add a tool request menu entry. Clicking it will take you to a Digg-style web page where users can submit tool ideas and other users can digg them or give comments and feedback.
- Make Folder Structure – This option check box will spawn an empty folder structure identical to that of the global code depot. This way, the team code structure and organization will be identical to that of the global one, which the tech artists are already used to.
- Team Drop Down – Specify which team the tool box is made/modified/installed for.
- Install/Edit – Creates the team tool box. The steps of the creation process are outlined below:
Sections 4-9 are fully visible to devs and partially visible to artists. They enable devs to spawn team menus. This is done through code templates that contain keywords, which are searched and replaced by the variables the devs fill in. Above is a listing of the data we have to provide to spawn a team menu.
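The “made legal” step for the menu title is a one-liner. A sketch (regex approach assumed, example title hypothetical):

```python
import re

def legalize(title):
    """Strip spaces and other illegal characters so a menu title can
    double as a variable name."""
    name = re.sub(r'[^0-9a-zA-Z_]', '', title)
    if name and name[0].isdigit():
        # Variable names cannot start with a digit
        name = '_' + name
    return name

print(legalize('Team Art Tools!'))
```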
The team menu launches the same way as the global one (userSetup.mel – teamMenuBoot.py – teamMenu.py), the only difference is that teamMenuBoot.py is not unique per user but unique per team and therefore Perforced.
- teamMenuBoot.py is created from a code template and the keyword “@team@” is replaced by the team name variable created.
- teamNameTB.xml is created and, similar to the global pipeline, contains the data that is needed for the team tool delivery system. It is created on the network in the same spot as the global XML file so that the global tools can be aware of all created tool boxes, as well as necessary data needed to unify the preferences.
- teamMenu.py is created from a template where keywords are replaced by data extracted from the data we got from the pipeline manager. This file is Perforced in the same place as teamMenuBoot.
- A teamSave.mel, and a teamScene.mel, which we will cover below.
Functionally, the two most important features of the team toolbox are teamSave and teamScene. They represent two types of intercepts that we use all the time. Logically enough, teamSave is a code hook at the time an artist saves a scene, and teamScene runs any time an artist opens a scene.
To Implement Team Save
In our global tool depot we have an extraScripts/overload/ directory. In this directory we store a MEL file that contains copies of the following procs from Maya, necessary to cover all the different ways you can save out of Maya. These procs are then overloaded as part of the Maya menu construction:
- global proc int pv_performAction
- global proc FileMenu_SaveItem
There is one such file for each Maya version, and we follow this naming convention (though it really doesn’t matter what you call the file) – “SOE_saveOverload_2012.mel”. You can then figure out which Maya version is starting using the about command, and source the correct version of the file. To be safe, you want to implement a new file for every Maya version.
These copies have a teamSave; call inserted. To see where, you can download the file here and take a look at it yourself (just search the file for “teamSave”).
To then create the save intercept, the team menu contains a source statement of this file which will then overload the default Maya behavior. Following the overload which calls a teamSave command, we have to make sure we also source the team’s teamSave.mel file which was created on install and exists in a known location.
To implement the teamScene (open intercept)
We will source the teamScene.mel file and then create a script job on event “sceneOpen” and “newSceneOpened” with the teamScene command.
It is then really up to each team to write the code and fill in what they want to do on the save and open intercepts, but it is a great time to run verification of various things and maintain meta systems inside the Maya scene – topics for a later date perhaps (this post is getting long enough).
Global Pipeline Foundation Summary
That concludes the implementation of our global delivery system and pipeline foundation, as well as the creation of the supplementary team delivery systems and pipeline foundations. This is the result of years and years of iteration and abstraction. What I find now is that most of the code we write goes into our global tool depot. The team code depot gets a few things that are unique to the various teams. When writing team pipelines, we piece together code from the global depot into user interfaces, plus some team-specific code that lives in the team depot. Thanks to the ability to spawn a team delivery system and pipeline foundation with save intercept hooks, scene preference hooks, Perforce integration and logging built in, it is really fast to get up and running.
I have not yet talked about the thing that really enables us to write the majority of our code to the global depot; the abstraction of naming conventions and folder structures. Let’s talk about that one briefly, followed by some thoughts about using key-worded templates, as well as our deployment workflow.
Abstraction of folder structures and naming conventions
There are a few ways to go about doing this, and there really isn’t a right or wrong way. I find that when you abstract things far enough, you often arrive at a schema and a schema-parsing solution. That was the solution for us and how we abstracted all folder structures and naming conventions. The schema is an XML file that each team has (teamName.xml), containing all the data-driven information about the team-specific pipelines, folder structures and naming conventions in one single location; the schema parser is what interprets that schema. The schema parser is spaghetti land, with tons of “if team is…” and the needs of all teams relegated into one class. It is way better to have code like that in one place: it enables the entire code depot to be very clean and allows us to write code that can act on multiple teams at once… it sort of acts like the glue that ties all teams together into the SOEGlobal framework.
Not everything uses the schema. As an example, let’s take a look at a team-shared exporter that should know where to export a certain file to. The spaghetti is shown below, followed by the call. Another positive thing is that everything important to the folder structure and naming conventions gets relegated into a single place, so it is very easy to find all of it when rolling a new project.
elif self.team == gVarInit.deepPS2:
    # mobu
    if self.mobu:
        wieldType = self.takeName.split('_')[1]
        self.exportPath = Path('%s/%s/%s.xmd' % (self.filePath.replace('MoBu','Morpheme').replace('AnimationsForExport','Animations'), wieldType, self.takeName))
        if not wFile: self.exportPath = self.exportPath.parent
        return self.exportPath.makePretty()
    # maya
    else:
        if re.search('Animations', self.filePath):
            self.exportFileName = self.filePath.basename().replace(".ma", ".xmd").replace(".mb", ".xmd").replace(".fbx", ".xmd")
            rigPath = self.filePath[:self.filePath.rfind('Maya')]
            weapon = self.exportFileName.split('_')[1]
            self.exportPath = Path('%s/Morpheme/Animations/%s/%s' % (rigPath, weapon, self.exportFileName))
        else:
            self.exportPath = Path(self.filePath.replace(".ma", ".gr2").replace(".mb", ".gr2"))
        if not wFile: self.exportPath = self.exportPath.parent
        return self.exportPath.makePretty()
expObj = CA_schemaParser.ExportParse()
self.filePath = expObj.getExport()
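Stripped of the team specifics, the shape of the schema-parser pattern looks roughly like this (team names, extensions and path rules hypothetical):

```python
import os
import re

class ExportParse(object):
    """Sketch of a schema-parser export resolver: all of the
    'if team is...' path spaghetti is relegated to this one class,
    so the rest of the depot stays clean."""

    def __init__(self, team, file_path):
        self.team = team
        self.filePath = file_path

    def getExport(self):
        if self.team == 'teamA':
            # teamA exports animations as .xmd under a Morpheme folder
            name = re.sub(r'\.(ma|mb|fbx)$', '.xmd', os.path.basename(self.filePath))
            return os.path.join(os.path.dirname(self.filePath), 'Morpheme', name)
        elif self.team == 'teamB':
            # teamB exports models in place as .gr2
            return re.sub(r'\.(ma|mb)$', '.gr2', self.filePath)
        raise ValueError('No export rule for team %s' % self.team)

print(ExportParse('teamB', '/art/props/crate.ma').getExport())
```

Callers never see the branching; they just ask the parser for the export path, exactly as the two-line call above does.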
Asset Hub – A better way to set up assets
This is another example of how to use schema and a schema parser to automate naming conventions and folder structures and set them in stone, as well as a nice way to use keyworded templates for various files.
If there are category similarities between many teams within your company, with slightly different needs and definitions (environments, characters, player characters (PC), non-player characters (NPC), etc.), you may be able to abstract that into a keyword-parsing solution. Even if there aren’t many similarities, this is very doable on a team-by-team basis, and it is very worthwhile doing.
When you have done that, there is no reason why you can’t make a compiled program that lets the user define (name) these keywords; your program can take that information and create all the files needed for that asset. Set up templates, including Maya file(s), texture templates and source art files for each asset type, and spawn away.
We set up the Maya scenes with a stand-in asset whose texture paths, shader names and node names are all keyworded; parse and keyword-replace all of it and you end up with a correctly set up Maya scene. The artists can then use your program to create all the assets and get a changelist with everything created. They just fill in the art, and they will thank you for it.
Everyone will be happier for it down the line since this enforces the naming conventions and folder structures for everything. All tech artists know those moments where that is put to the test down the line in the development of a game.
The below schema section gets parsed in the Asset Hub code, and the UI in the image gets created from it.
The Tech Artist Workflow
As explained earlier, we have a Perforce depot, “SOEGlobal”, that we develop out of. This is also the environment the tech artists run most of the time. We check out files, develop pipelines and test them in this environment. When we think the code is stable enough, we check in and let the code be banged on by the developer group. You can also let your art “beta testers” run this environment.
When deemed worthy of production, we push the code to another folder on the same server, “SOEGlobal_release”, through a branch spec. We then run a Bandit script to sync “SOEGlobal_release” across to multiple servers (teams). The other servers have a “vendor branch”, i.e. the tools do not go straight into a team’s live environment. Each team’s server has another branch spec that allows the team’s tech artist to diff and deploy whichever tools he wants into the live environment. If a team is without a tech artist at any point, it is possible to map the team’s workspace to take the vendor branch straight into the live environment.
End Of Mind Share
That concludes the recap of my Master Class in Paris, France, May 2011 – my mind share and thoughts on a global pipeline foundation or framework. I feel incredibly fortunate to have been given the time and opportunity to iterate our pipeline foundation to where it is today, and I have learned tons along the way. I am fairly sure it is a never-ending process, but I feel that what we have in place now is a strong solution and the machinery is running smoothly. I did notice that this very topic was trending at this year’s GDC in the tech art community, which I thought was interesting. We are a hive mind.
Feel free to share any thoughts and/or questions below.
Kudos to Jason Parks, Jon Rohland, and Martin Karlsson for being part of this ride!
/Christian Akesson
You did it. The mother of all pipelines. Congratulations.
The industry is better off for it.
Maybe we’ll announce our Open Source version of this soon?
Great setup… and great job breaking this down so well.
Thanks, I appreciate that.
Christian, this is a great and massive explanation of how to arrive at a good solution for a modern and dynamic pipeline.
I have developed solutions for many pipelines and teams specific tools but I guess this explanation will take me long time to understand deeply and I am pretty sure this will always be my main knowledge base of great ideas.
Thanks a ton.
P.S. It’s incredible – you not only described how to do it in general and specific terms, you made me start to like Python !!!!
🙂 Glad to hear it Gerardo. I promise that not only will you come to love Python, but there will be a time where you look back at MEL code and it will physically make you cringe 🙂 .
I was deep into MEL myself and I really loved writing in it. Now, I could never go back to MEL again (unless I had a gun to my head 🙂 )
Hey Christian, great page! Fairly recently, I’ve had to jump back into pipeline and tools development with Maya. Your page has been a major resource, an invaluable wealth of information. SOE definitely have the man for the job!
Nice to see that you’ve discovered and embraced OOP, as well. Python is a fun, well structured, and very capable language, and certainly far easier to master than C++. Now that Maya officially supports .NET plug-ins, maybe you’ll get into C#, or even the awesome F# (my new favorite language). 😉
Cheers, my friend!
Thanks Milo. The evolution of this thing will likely never stop, and I will write another article when the next evolutionary step is in place 🙂 .
C# is most definitely in my near future.
Cheers to you buddy,
/Christian