CUCC Expedition Handbook

Troggle Architecture

Troggle overall architecture

Troggle is made up of three large chunks, each covered in its own section below: the troggle data architecture, the troggle parsers and input files, and the Django server and webpage (client).

Troggle data architecture

The core of the troggle software is the data architecture: the set of tables into which all the cave survey and expo data is poured and stored. These tables are what enable us to produce a large number of different but consistent reports and views.
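As a purely illustrative sketch (not troggle's actual code), each such table corresponds to a Django model class. The class names "Cave" and "SurveyStation" and all of their fields below are invented for the example; the real definitions are in the troggle data model code linked below.

    # Illustrative only: "Cave", "SurveyStation" and their fields are invented
    # names, not troggle's real model definitions.
    from django.db import models

    class Cave(models.Model):
        slug = models.CharField(max_length=50, unique=True)

    class SurveyStation(models.Model):
        name = models.CharField(max_length=100)  # e.g. a survey station name
        x = models.FloatField()                  # easting (m)
        y = models.FloatField()                  # northing (m)
        z = models.FloatField()                  # altitude (m)
        # Relations between tables are what let the same data feed many
        # different but consistent reports and views.
        cave = models.ForeignKey(Cave, on_delete=models.CASCADE,
                                 related_name="stations")

        def __str__(self):
            return self.name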


Diagram: Tables (Objects) - wildly out of date

Diagram: Packages

Tables have also been added to the core representation which are not included in this old diagram, e.g. Scannedimage, Survexdirectory, Survexscansfolder, Survexscansingle, Tunnelfile, TunnelfileSurvexscansfolders, Survey. See the Troggle data model python code (15 May 2020) and the Class Diagram below.


Class Diagram

Purpose

The reasons why we have an online system at all are described in our website history.

There is an introductory article "Troggle: a revised system for cave data management".

Implementation in software

Troggle is written in Python (over 13,000 lines, excluding comments) and is built on the Django framework, with nearly 4,000 lines of HTML templates (excluding comments).

Before starting to work on Troggle, it is a good idea to run through an initial install and exploration of a tutorial Django project, to get the Django concepts bedded down: they are not at all obvious, and some exist only within Django.

Django is what puts the survey data into a database in a way that lets us write far less code to get it in and out again, and it provides templates which make it quicker to maintain all the webpages. See the Django design philosophy for why we chose it: while Django comes with a full stack (database, request/response, URL mapping, HTML templates), the layers of the stack are independent and individually replaceable.
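As a hedged illustration of what "far less code" means in practice, the sketch below shows a view built on the invented SurveyStation model from the earlier example; the view name station_list, the template name stations.html and the cave_slug parameter are also invented, not troggle's real ones.

    # Sketch only: station_list(), stations.html and cave_slug are invented names.
    from django.shortcuts import render
    from .models import SurveyStation   # the invented model from the sketch above

    def station_list(request, cave_slug):
        # One ORM query replaces hand-written SQL and row-to-object mapping.
        stations = SurveyStation.objects.filter(cave__slug=cave_slug).order_by("name")
        # The template holds the shared page layout, so the HTML is maintained
        # in one place rather than repeated in every view.
        return render(request, "stations.html", {"stations": stations})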

We have to keep up to date with new releases of Django; see Upgrading Django for Troggle.

Troggle parsers and input files


Django server and webpage (client)

To understand how troggle imports the data from the survex files, tunnel files, logbooks etc., see the troggle import (databaseReset.py) documentation.

The separate import operations, one for each type of input file (survex files, tunnel files, logbooks, etc.), are managed by the import utility (databaseReset.py) and are described in that documentation.
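Each import operation follows broadly the same pattern: read files from the expo data and create rows in the database tables. The sketch below is an invented illustration of that pattern; the function import_logbook, the model LogbookEntry and the file format are made up for the example and are not the code in databaseReset.py or the real parsers.

    # Invented illustration of the import pattern: read an input file and
    # create one database row per record.
    from pathlib import Path

    def import_logbook(path: Path) -> int:
        from .models import LogbookEntry    # invented model name
        count = 0
        for line in path.read_text(encoding="utf-8").splitlines():
            if not line.strip() or line.startswith("#"):
                continue                    # skip blank and comment lines
            date, _, text = line.partition(" ")
            LogbookEntry.objects.create(date=date, text=text)
            count += 1
        return count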

Files generated by troggle

There are only three places where this happens: where online forms are used to create cave entrance records and cave records, and where a form is used to record the information about a wallet. These records are created in the database but are also exported as files, so that when troggle is rebuilt and the data reimported the new data is still there.
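A hedged sketch of that "database plus file" pattern follows; the form class CaveForm, the helper write_cave_file and the template name are invented, not troggle's actual code.

    # Invented sketch: a record created by an online form is saved to the
    # database AND written out as a file, so a later full re-import rebuilds it.
    from django.shortcuts import redirect, render
    from .forms import CaveForm             # invented form class
    from .export import write_cave_file     # invented file-export helper

    def new_cave(request):
        form = CaveForm(request.POST or None)
        if request.method == "POST" and form.is_valid():
            cave = form.save()               # 1. rows in the database
            write_cave_file(cave)            # 2. the same data as a file on disk
            # assumes the invented model defines get_absolute_url()
            return redirect(cave.get_absolute_url())
        return render(request, "cave_form.html", {"form": form})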

Any page in this handbook, and any logbook entry, can also be edited online; the edited page is saved as a file and registered with the version control system.
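The sketch below shows one way such an online edit could be saved and registered with version control; using git through subprocess, and the function save_and_register, are assumptions made for the example, not a description of troggle's actual implementation.

    # Assumption for illustration only: the page file lives in a git working
    # copy and is committed after each online edit.
    import subprocess
    from pathlib import Path

    def save_and_register(page_path: Path, new_html: str, editor: str) -> None:
        page_path.write_text(new_html, encoding="utf-8")
        repo_dir = page_path.parent
        subprocess.run(["git", "add", page_path.name], cwd=repo_dir, check=True)
        subprocess.run(["git", "commit", "-m", f"Online edit by {editor}"],
                       cwd=repo_dir, check=True)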



See: Troggle data model in python code
Return to: Troggle intro
Troggle index: Index of all troggle documents