Which do you prefer:
```python
import re

def basepart1(filename):
    idx = 1
    test_substring = filename[idx*-1]
    while test_substring.isnumeric():
        idx = idx + 1
        test_substring = filename[idx*-1]
    return filename[0:len(filename)-idx+1]

def basepart2(filename):
    return filename.rstrip('0123456789')

def basepart3(filename):
    return re.sub(r"\d*$", "", filename)

def basepart4(filename):
    # unicode() is Python 2, where isnumeric() exists only on
    # unicode objects, not on byte strings.
    filename = unicode(filename)
    idx = 1
    test_substring = filename[idx*-1]
    while test_substring.isnumeric():
        idx += 1
        test_substring = filename[idx*-1]
    return str(filename[:-idx+1])
```
This question illustrates for me why programmers and sysads are so notoriously fussy and argumentative and occasionally just difficult. A little background will help: the four different definitions above are based on Python code I came across “in the wild”. A particular system creates a number of files with names like
budget77. In these examples, "1January", "customerA", and "budget" are the "bases" or "roots" that carry a little human meaning, while the numeric suffixes are artifacts of a particular algorithm used for related processing.
basepart1 is a variation on what a newcomer to this area produced.
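To make the comparison concrete, here is a quick check of the two one-liners against sample names. (The article shows only budget77; the other suffixed names, 1January3 and customerA12, are my invention for illustration, built from the bases mentioned above.)

```python
import re

# Repeating two of the definitions so this snippet is self-contained.
def basepart2(filename):
    return filename.rstrip('0123456789')

def basepart3(filename):
    return re.sub(r"\d*$", "", filename)

# "budget77" is from the article; the other two names are hypothetical.
for name in ["1January3", "customerA12", "budget77"]:
    print(name, "->", basepart2(name), basepart3(name))
```

One of the differences alluded to below does surface at the edges: for an all-digit name such as "123", basepart2 and basepart3 quietly return the empty string, while basepart1 walks off the front of the string and raises an IndexError.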
When programmers talk about “elegance” and “idiomatic style”, they have in mind something like
basepart3: one line does the work of six! We don't do this just for recreation, like trying to reduce one's golf score; the more succinct forms are far more expressive and easier to maintain. All the definitions do (roughly; the differences are an interesting topic for another time) the same thing, and, in a business sense, they're therefore all equally valuable. This latter judgment drives computer people wild: they know that some day a manager will come through and announce that the rules for filename formation have been slightly modified. On that day, it might take an hour, or even two, to update basepart1;
basepart3 can probably be enhanced accurately in seconds. To equate basepart1 and
basepart2 because they do the same things frustrates computer people in much the way it grates on a trained musician to lump the work of Justin Bieber and Mozart together as "songs".
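To see why the succinct form absorbs change so easily, imagine one hypothetical version of that managerial announcement: numeric suffixes may now be preceded by an underscore, as in budget_77. (This rule, and the name basepart3_v2, are my invention purely for illustration.) The regex-based definition adapts with two added characters in the pattern, where the loop-based basepart1 would need its termination logic rethought:

```python
import re

# Current rule: a base followed by optional trailing digits.
def basepart3(filename):
    return re.sub(r"\d*$", "", filename)

# Hypothetical revised rule: an optional underscore may precede the
# trailing digits, as in "budget_77".  Two characters added: "_?".
def basepart3_v2(filename):
    return re.sub(r"_?\d*$", "", filename)

print(basepart3_v2("budget_77"))  # underscore and digits both stripped
print(basepart3_v2("budget77"))   # old-style names still handled
```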
This little example also illustrates for me characteristic difficulties of APM (application performance management) selection and use. Even when an organization has a clear set of requirements for APM, and constructs tests to exercise specific products against those requirements, interpretation of the results is a huge challenge. The details of implementation matter for a product as wide-ranging and intensively used as APM. If a particular variety of APM diagnoses a specific test failure correctly in a test environment, what does that say about its reliability when dealing with billions of Production events generated by monitored applications? When a demonstration installation models a test configuration adequately, how much does that tell us about its ability to extend to the organization's full real-time process table? Two programs that behave the same in a test situation might differ on the inside as much as the six- and one-line functions above.
One conclusion to draw from the difficulty of these questions is that APM decisions need to be team efforts: they demand mature judgment based both on strategic business needs and on technical compatibility with the infrastructure and engineering culture of the organization.