There are several different practices and scopes of software testing, but I will focus on testing during development. If you work on a very small project or alone on a piece of software, it may seem easier to test your software by clicking around on the UI and checking whether it behaves as it should. But as soon as things get a little bigger, this style of testing causes some serious problems:
- Deploying your software and clicking around on the GUI takes a lot of time.
- There is no protection against regressions: your feature works, you move on to the next one, the first one breaks, and if you forget to try it out again your software fails.
- There is no way to have the tests executed by a continuous integration server, Jenkins for example.
So once your code reaches a certain size, high-quality work without testing becomes nearly impossible, or it takes a long time until everything works, and a small change can turn it into a mess very quickly. At the beginning, though, everything looks simple and straightforward. It's a little bit like a Jenga tower: you build it up and check everything. Let's imagine it works and your customer is satisfied. But what if he wants some minor changes? Every change is like pulling a piece out of the tower. Change it often enough and the tower will fall down. So you need something reliable that guarantees your changes only produce the result your customer wants and do not break anything else. But how do you deal with that when you are under a lot of pressure and your schedule is very tight?
First of all, let me give you some cons of testing:
- Testing does not by itself produce high-quality code.
- Testing is no guarantee; it is more like a safety belt.
- Testing takes time: you have to write the tests.
- Testing affects your software design.
- The result of your testing depends on your skills.
So what are the pros?
- Tests are fast and accurate.
- Tests are reproducible.
- Tests can be executed by a machine, e.g. a continuous integration server.
- Tests allow you to work together on the same piece of code without knowing every feature in detail.
So far nothing new, I guess. But the question for me is how to introduce tests so that you get as many of the pros as possible without losing too much to the cons. To illustrate this, let's take an example. One extreme is having no tests: you just write your code and that's it. The other extreme is having everything tested. That can take a huge effort, and unfortunately you could still fail even with all those tests, so tests are no 100% guarantee for working software. But how can we find a good solution with minimum effort and maximum value? Let's have a look at some different ways of testing your software.
As you design your software top down, you can also write your tests top down. At a very high level, your software looks like this:
Your software takes some input (user input, file input, whatever) and creates some output from it. That's all your software does, if you look at it from far enough away. Or we could put it like this:
output = f(input)
This means your software f creates an output from a given input. The question your tests have to answer is: does your software work correctly in all cases? In theory you could write a brute-force test that simulates every possible input value and checks the output. That is theoretically possible, but very inefficient. Another problem is that the output is sometimes hard to get at, because it is written to a file or sent away in a message. If your software is stateful, the output also depends on the previous state; there may be variation over time, and so on. So this idea of input and output is nice, but not so easy in practice. Also, if your application has various interfaces, it is important to check not only the expected result. Check also that nothing you do not expect has changed. Think back to the tower: the expected result is the piece in your hand when you pull it out, but the not-expected result is that the tower remains intact and does not burst into flames. So if you want to test properly, you have to check both the expected and the not expected. This is a little bit difficult, because changes could be anywhere, so you have to watch your code carefully and pay close attention to dependencies.
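A minimal sketch of this idea in Java, with an invented example function f (all names and behaviors here are hypothetical, not from any real project): the test checks the expected output and, as far as possible, that nothing else was changed.

```java
import java.util.ArrayList;
import java.util.List;

public class FunctionTestSketch {
    // Hypothetical "software": sums a list of prices.
    static int f(List<Integer> prices) {
        int sum = 0;
        for (int p : prices) sum += p;
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> input = new ArrayList<>(List.of(3, 4, 5));

        // Expected result: the output we asked for.
        int output = f(input);
        if (output != 12) throw new AssertionError("expected 12, got " + output);

        // Not-expected result: the call must not have changed anything else.
        // Here that is checked by verifying the input list is still intact.
        if (!input.equals(List.of(3, 4, 5))) throw new AssertionError("input was mutated");

        System.out.println("ok");
    }
}
```

For a pure function like this, checking the "not expected" side is easy; the harder it gets (files, messages, hidden state), the more valuable that check becomes.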
One popular testing method is unit testing, combined with a unit test framework (JUnit, for example) and maybe a mocking framework (Mockito or something else). But the question for me remains: how will you test your software? Let's imagine you have a linear dependency of five software modules, in this case classes.
The first step is to write a unit test for every piece, so-called white-box tests. At the end you have five classes and five unit tests for those classes. But is that enough? Does it prove that your software works correctly? No, it does not, because the input for class B is only a simulated result of class A: your unit test for class B does nothing more than simulate the output of class A. Mathematically, the chain looks like this:
output = e(d(c(b(a(input))))) = f(input)
As you can see, your function f is split into several functions a, b, c, d, e. To make sure everything works together, you have to run more tests: test the classes in combination, starting small by testing A and B together, then B and C, and so on. In that case you simulate the input for A and check the result of B, for example.
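The gap between a unit test and a combination test can be sketched like this, with a chain shortened to two stages and invented single-method behaviors (none of this is from a real codebase):

```java
public class ChainSketch {
    // Hypothetical stages of the linear chain, each a pure function.
    static String a(String s) { return s.trim(); }
    static String b(String s) { return s.toUpperCase(); }

    public static void main(String[] args) {
        // Unit test for b: the input is a *simulated* result of a.
        // If a's real behavior drifts away from this simulation,
        // this test keeps passing anyway.
        String simulatedAOutput = "hello";
        if (!b(simulatedAOutput).equals("HELLO")) throw new AssertionError();

        // Combination test for A and B: the real output of a feeds b,
        // so a mismatch between the stages would be caught here.
        if (!b(a("  hello ")).equals("HELLO")) throw new AssertionError();

        System.out.println("ok");
    }
}
```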
So now you have another four test classes. In total, up to now, we have:
- Five classes under test
- Nine test cases for them
But this is still not enough: what if the simulated input for class C is different from the real output of A and B? So we need more tests. Now we test three classes together.
Test A, B, C, then B, C, D, then C, D, E: another three tests, and still no 100% reliable result. In the end you get 5 + 4 + 3 + 2 + 1 = 15 tests. So we write 15 test cases to make sure everything works correctly together, and that is only for simple linear dependencies. Okay, we want the software to work correctly and the customer to be satisfied. We write all the tests, everything is fine, the customer is happy, but then a small feature request comes up during the presentation: only a small change in class B, no big thing. But how many tests have to be changed? Does your IDE help you? Only if the interface between A and B or between B and C changes, not if the internal behavior is different. You have to change the test for B alone, which is straightforward. But you also have to change the simulated input for the test of C, because B has changed. You have to adapt the test for A & B, also straightforward, but do not forget the test for B & C. And change the test for A & B & C, and keep C & D & E in mind. And so on. In the end you have to change around seven tests for this minor change. That can be a lot of work, and maybe your boss does not understand why finishing a small change request takes so much time; and time is money. So what to do? Have fewer tests? Not a good idea: you lose quality. But maintaining that many tests takes a lot of time. And do not think only about small changes; think about interface changes between A and B. The risk of letting the tower crash is not to be disregarded. But what could be done instead?
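The count 5 + 4 + 3 + 2 + 1 = 15 is simply the number of contiguous sub-chains of five modules; for a linear chain of n modules it is n(n + 1) / 2, so the test maintenance effort grows quadratically. A tiny sketch of that arithmetic:

```java
public class TestCountSketch {
    // Number of contiguous sub-chains (and thus combination tests in the
    // scheme above) for a linear chain of n modules: n + (n-1) + ... + 1.
    static int testCount(int n) {
        return n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(testCount(5));  // prints 15, matching 5+4+3+2+1
        System.out.println(testCount(10)); // prints 55
    }
}
```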
Let's have a more detailed look at the tests we produce. There are a lot of duplicates and copies of functionality (which does not mean copies of code): you copy the functionality of B into the tests for C, for C & D, for C & D & E, and so on. It's always the same input but a different output. And think back to the input/output function: if you look from the top, it's just one function. If you test that function, you have also tested everything inside it. So for our example we change everything and switch from the 15 test cases to one single test, testing A & B & C & D & E together. If the customer now wants a change, you only have to check the output, and that's it. But the more classes or modules you test together, the more complicated the test cases get, because there is more functionality in between. You no longer have to change many tests, but the tests get much bigger. That creates another mess, because huge tests are more vulnerable to mistakes. So what's the solution?
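Such a single black-box test over the whole chain A to E could be sketched like this, again with invented single-method stages standing in for the five classes:

```java
public class WholePipelineTest {
    // Hypothetical stages a..e of the linear chain; behaviors are made up.
    static String a(String s) { return s.trim(); }
    static String b(String s) { return s.toUpperCase(); }
    static String c(String s) { return s.replace(' ', '_'); }
    static String d(String s) { return "[" + s + "]"; }
    static String e(String s) { return s + "!"; }

    // f is just the composition e(d(c(b(a(input))))).
    static String f(String input) {
        return e(d(c(b(a(input)))));
    }

    public static void main(String[] args) {
        // One black-box test over the whole chain: only input and final
        // output matter; the internals of a..e are free to change.
        String out = f("  hello world ");
        if (!out.equals("[HELLO_WORLD]!")) throw new AssertionError(out);
        System.out.println("ok");
    }
}
```

If the internals of b change but the overall result stays the same, this test does not have to be touched at all; the price is that a failure points only at the chain, not at the broken stage.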
In my personal opinion it looks like this:
- Watch your software design and architecture very carefully.
- Keep an eye on dependencies. They cause complexity, which increases the chance of making mistakes.
- Balance your test scope carefully, depending on the complexity of your code.
- I prefer small modules of around 2000-3000 LoC and use black-box tests for them.
- Difficult code (algorithms, for example) should be tested separately with white-box tests.
- Try to write simple code: "Simplicity–the art of maximizing the amount of work not done–is essential." (Agile Manifesto)
- Try to work stateless and avoid multithreading when possible.
- Think in functions and let yourself be inspired by functional programming.
- Do not try to be trendy; try to be professional, even if that is not trendy.
- Keep a close eye on the customer value of everything you do – even tests.
- Keep your production code strictly separated from your test code.
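As a small illustration of the "work stateless and think in functions" advice (all names here are invented), compare a stateful operation with its stateless counterpart:

```java
public class StatelessSketch {
    // Stateful variant: the result depends on hidden state, so a test
    // must control the call order and the previous state.
    static int counter = 0;
    static int nextStateful() { return ++counter; }

    // Stateless variant: the same input always yields the same output,
    // which makes it trivially and repeatably testable.
    static int next(int current) { return current + 1; }

    public static void main(String[] args) {
        // The stateful version gives a different answer on every call.
        int first = nextStateful();
        int second = nextStateful();
        if (first == second) throw new AssertionError();

        // The stateless version is repeatable: same input, same output.
        if (next(41) != 42) throw new AssertionError();
        if (next(41) != 42) throw new AssertionError();
        System.out.println("ok");
    }
}
```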
So I hope you got some inspiring input. I would also like to get some constructive feedback and hear about the experiences you have made.