# Figleaf: finding untested Python code

I manage the testing for a very large financial pricing system. Recently our HQ have insisted that we verify that every single part of our project has a meaningful test in place. At the very least they want a system which guarantees that when we change something, we can spot unintentional changes to other sub-systems. Preferably they want something which validates the correctness of every component in our system. That's obviously going to be quite a lot of work! It could take years, but for this kind of project it's worth it.

I need to find out which parts of our code are not covered by any of our unit tests. If I knew which parts of the system were untested, I could set about developing new tests which would gradually approach my goal of complete test coverage. So how can I go about running this kind of analysis?
"validates the correctness of every component in our system" That's okay, however, since it gets better further on. "every single part of our project has a meaningful test in place" Often the developer's intent is not immediatly obvious, hence the need to research and clarify anything which looks vague and then enshrine this knowledge in a unit-test which makes the original intent quite explicit. We do not want to simply reproduce the regtests in pure python. " Meaningful" tests (to me) are ones which validate that the function does what the developer originally intended. a single type of financial option or a financial model used by many types of option). It might even refer to a single financial concept (e.g. " Component" is intentionally vague: Sometime it will be a class, other time an entire module or assembly of modules. Other developers will be taking responsibility for non Python components of the project (which are also outside of scope). The object of this exercise is to plug any gaps in our python unittest suite (only) suite so that every component has some degree of unittest coverage. We also have a very stable continuous integration platform (built with Hudson) which is designed to split-up and run standard python unit-tests across our test facility: Approx 20 PCs built to the company spec.
## Answer

Your HQ's mandate that "every single part of our project has a meaningful test in place" comes in two strengths in your question: at the very least, a suite which "guarantees that when we change something we can spot unintentional changes to other sub-systems", and preferably one which "validates the correctness of every component in our system".

The first is a logical consequence of any unit-testing discipline: run the whole suite on every change, and an unintentional change to another sub-system shows up as a failure.

The second is okay as stated, and it gets better further on in your clarification. But "correctness" still has to be pinned down, and "component" admits a number of alternatives. You only mention Python, so I'll assume the entire project is pure Python. In increasing order of strictness, a suite:

1. Validates the correctness of every module.
2. Validates the correctness of every class of every module.
3. Validates the correctness of every method of every class of every module.

Even the weakest of these can be made mechanical, as the sketch below shows.
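A minimal sketch, assuming the system lives under one importable package (here called `pricing`, a made-up name): walk the package and check that every module at least imports cleanly.

```python
import importlib
import pkgutil
import unittest

import pricing  # assumed top-level package name for your system


class TestEveryModuleImports(unittest.TestCase):
    """Weakest per-module check: every module can be imported without error."""

    def test_all_modules_import(self):
        for info in pkgutil.walk_packages(pricing.__path__, prefix="pricing."):
            with self.subTest(module=info.name):
                importlib.import_module(info.name)


if __name__ == "__main__":
    unittest.main()
```

Passing this says nothing about correctness yet, but it guarantees that every module is at least touched by the suite, and it flushes out import-time breakage immediately.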
You haven't asked about line-of-code coverage or logic-path coverage, which is a good thing. Enumerate every module; any module without a corresponding unit test is a gap to plug.
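To find those gaps mechanically, run the entire suite under a coverage recorder and report what never executed. Figleaf, which gives this page its name, records coverage on the same record-then-report principle; the sketch below uses the coverage.py API instead, since those are the calls I can vouch for, and it assumes the tests are discoverable under a `tests/` directory and the code lives in the (hypothetical) `pricing` package:

```python
import unittest

import coverage  # third-party: pip install coverage

# Restricting measurement to our own package keeps the report readable;
# "pricing" is a placeholder for the real top-level package name.
cov = coverage.Coverage(source=["pricing"])
cov.start()

# Run the entire unittest suite under the recorder; "tests" is the
# assumed directory that unittest discovery can find.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner(verbosity=1).run(suite)

cov.stop()
cov.save()

# Console summary: show_missing lists the line numbers no test executed.
cov.report(show_missing=True)

# Per-file HTML report with untested lines highlighted, written to htmlcov/.
cov.html_report(directory="htmlcov")
```

Modules that show up with large "missing" ranges, or never show up at all, form the to-do list; each then gets at least a skeleton test like the one above.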