CUCC Expedition Handbook

Troggle Automated Testing

We have a suite of more than 100 smoke tests.

These are 'end to end' tests with two purposes only: to show very quickly whether anything is badly broken, and to pin down exactly which part of the code a version upgrade has broken.

This is surprisingly effective. Django produces excellently detailed tracebacks when a fault happens, which allow us to home in on the precise part of the code that has been broken by a version upgrade.

We also have a handful of unit tests which just poke data into the database and check that it can be read out again, along the lines of the sketch below.
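
A minimal sketch of that style of unit test, using Django's built-in User model purely for illustration (the real tests exercise troggle's own models):

    from django.contrib.auth.models import User
    from django.test import TestCase

    class DataRoundTripTest(TestCase):
        def test_user_round_trip(self):
            # Poke a row into the throwaway in-memory test database ...
            User.objects.create(username='testuser')
            # ... and check that it can be read out again.
            retrieved = User.objects.get(username='testuser')
            self.assertEqual(retrieved.username, 'testuser')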

The test code is all in troggle/core/TESTS/.

Running the tests

The tests are run manually by troggle programmers like this:
 troggle$ python3 manage.py test --parallel auto -v 1
or, if someone has made a mistake and the tests interfere with each other, sequentially:
 troggle$ python3 manage.py test -v 1

Running the tests in parallel should work on the server too (though on Django 3.2 the 'auto' keyword must be omitted), but at present they fail there with the message

   (1044, "Access denied for user 'expo'@'localhost' to database 'test_troggle_1'")

On the server, running them sequentially (not in parallel) is still quite quick:

   Ran 104 tests in 21.944s

Example test

The test 'test_page_expofile' checks that a particular PDF is served correctly by the web server and that the response is the correct length of 2,299,270 bytes:


    def test_page_expofile(self):
        # Flat file test: fetch a PDF served directly from /expofiles/.
        response = self.client.get('/expofiles/documents/surveying/tunnel-loefflerCP35-only.pdf')
        self.assertEqual(response.status_code, 200)
        # The whole file should come back, byte for byte.
        self.assertEqual(len(response.content), 2299270)

Django test system

This test suite uses the Django test system. One of the things this does is to ensure that all the settings are imported correctly; it also makes it easy to specify a test as an input URL together with the expected HTML output, using the Django test client. It sets up a very fast in-memory SQLite database purely for the tests. No tests are run against the real expo database.
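
For illustration, a test of that shape looks like this; the URL and the expected fragment here are invented, not taken from the real suite:

    from django.test import TestCase

    class ExamplePageTest(TestCase):
        def test_page_contains_heading(self):
            # The test client sends the request through the full URL-routing
            # and view stack without starting a real web server.
            response = self.client.get('/handbook/index.htm')  # hypothetical URL
            # assertContains checks both the status code and that the
            # fragment appears somewhere in the response body.
            self.assertContains(response, '<h1>Expedition Handbook</h1>')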

Troggle tests

The tests can be run at a more verbose level by setting the -v 3 flag.
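
For example:
 troggle$ python3 manage.py test -v 3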

As yet we have no test database set up, so the in-memory database starts entirely empty. However, we have 'fixtures' in troggle/core/fixtures/: JSON files containing dummy data which are read in before a few of the tests (see the sketch below).
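
The mechanism is standard Django, sketched here with an invented fixture file name and contents:

    from django.contrib.auth.models import User
    from django.test import TestCase

    class FixtureTests(TestCase):
        # JSON files from a fixtures/ directory, loaded into the empty
        # in-memory database before each test in this class runs.
        fixtures = ['test_users.json']  # hypothetical fixture file

        def test_fixture_user_exists(self):
            # Assumes test_users.json defines a user named 'expotest'.
            self.assertTrue(User.objects.filter(username='expotest').exists())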

Current wisdom is that factory methods in the test suite are a superior way of managing test data for very long-term projects like ours. We have one of these, make_person(), in core/TESTS/test_parsers.py; we use it to create four people, who are then used in test_logbook_parse() to test the import parser on a fragment of an invented logbook.
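
A hedged sketch of the factory-method idea; the field name and import path here are assumptions, and the real make_person() in core/TESTS/test_parsers.py will differ:

    def make_person(fullname):
        # Each test gets a freshly created, valid Person without
        # repeating the setup boilerplate in every test method.
        from troggle.core.models.troggle import Person  # assumed import path
        person = Person(fullname=fullname)  # assumed field name
        person.save()
        return person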

How you can help

We could do with a lot more unit tests which test small, specific things. If we have a lot of these it will make future re-engineering of troggle easier, as we can more confidently tackle big re-writes and still be sure that nothing is broken.

We have only one test which checks that the input parsers work. We need tests for parsing survex files and for reading the JSON files for the wallets. We could also do with a directory browser/parser test for the survey scan files, and one for the HTML fragment files which make up the cave descriptions.

Have a look at Wikipedia's review of types of software testing for ideas.

If you want to write some tests and are having trouble finding something which is untested, have a look at the list of URL paths in the routing system in troggle/urls.py and look for types of URL which do not appear in the test suite checks; the command below is one quick way to list them.
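
For instance, this lists the registered paths (adjust the pattern to taste) so that you can compare them against the URLs exercised in core/TESTS/:
 troggle$ grep -E "path\(" urls.py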


Go on to: Troggle architecture
Return to: Troggle intro
Troggle index: Index of all troggle documents