CUCC Expedition Handbook

Troggle - a kinder simpler troggle?

Troggle runs much of the cave survey data management, presents the data on the website, and manages the Expo Handbook.

This part of the handbook is intended for people maintaining the troggle software. Day-to-day cave recording and surveying tasks are documented in the expo "survey handbook".

stroggle

At one time Martin Green attempted to reimplement troggle as "stroggle", using Flask instead of Django, at git@gitorious.org:stroggle/stroggle.git (but gitorious has since been deleted).

A copy of this project is archived by Wookey on wookware.org/software/cavearchive/stroggle/.

(But perhaps Flask is only simpler when starting a new project, and doesn't scale to complexity the way Django does?)

There is also a copy of stroggle on the backed-up, read-only copy of gitorious on "gitorious valhalla":
stroggle code
stroggle-gitorious-wiki
but note that this domain has an expired certificate, so https:// connections produce a warning.

The schema for stroggle is a single schema.json file. Compare the equivalent troggle schema file, which is indeed very much bigger.

Radost's proposal

Radost Waszkiewicz (CUCC member 2016-2019) proposed a plan for superseding troggle:

Hey,
on the design sesh we've looked a bit at the way data is organised in the loser repo and how to access it via troggle.

A proposal arose that all this database shenanigans is essentially unnecessary - we have about 200 caves, about 250 entrances, about 200 people and a couple of dozen expos. We don't need efficient lookups at all. We can write something which will be 'slow' and does only the things we actually care about.

[What Rad has misunderstood here is that the database is not for speed. We use it mostly so that we can manage 'referential integrity', i.e. have all the different bits of information match up correctly. While the total size of the data is small, the interrelationships and the complexity are quite large. From the justification for troggle:

"A less obvious but more deeply rooted problem was the lack of relational information. One table named folk.csv stored names of all expedition members, the years in which they were present, and a link to a biography page. This was great for displaying a table of members by expedition year, but what if you wanted to display a list of people who wrote in the logbook about a certain cave in a certain expedition year? Theoretically, all of the necessary information to produce that list has been recorded in the logbook, but there is no way to access it because there is no connection between the person's name in folk.csv and the entries he wrote in the logbook". [Aaron Curtis]

And for ensuring survey data does not get lost we need to coordinate people, trips, survex blocks, survex files, drawing files (several formats), QMs, wallet-progress pages, rigging guides, entrance photos, GPS tracks, kataster boundaries, scans of sketches, scans of underground notes, and dates for all those - Philip Sargent]
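The folk.csv problem quoted above can be made concrete with a small sketch. The schema and data below are invented for illustration (they are not troggle's real models), but they show the kind of cross-referencing question - "who wrote in the logbook about a certain cave in a certain year?" - that a single flat CSV cannot answer and a relational store answers with one join:

```python
import sqlite3

# Hypothetical minimal schema -- NOT troggle's actual models -- just enough
# to show why declared relationships matter more than lookup speed.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE cave   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE logbook_entry (
    id        INTEGER PRIMARY KEY,
    person_id INTEGER REFERENCES person(id),
    cave_id   INTEGER REFERENCES cave(id),
    year      INTEGER
);
""")
con.executemany("INSERT INTO person VALUES (?, ?)",
                [(1, "Alice"), (2, "Bob")])
con.executemany("INSERT INTO cave VALUES (?, ?)",
                [(1, "204 Steinbrueckenhoehle")])
con.executemany("INSERT INTO logbook_entry VALUES (?, ?, ?, ?)",
                [(1, 1, 1, 1999), (2, 2, 1, 2000)])

# "People who wrote in the logbook about a certain cave in a certain
# expedition year": one join across the declared relationships.
rows = con.execute("""
    SELECT p.name
    FROM person p
    JOIN logbook_entry e ON e.person_id = p.id
    JOIN cave c          ON e.cave_id   = c.id
    WHERE c.name LIKE '204%' AND e.year = 1999
""").fetchall()
print(rows)  # [('Alice',)]
```

A flat folk.csv records who was present each year, but nothing links a person's name to the logbook entries they wrote, so this query is simply unaskable there.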

Similarly I see little gain from doing the HTML via the Python chimera template pages. These contain mainly nested for loops which could just as well be written in e.g. Python.

[He could indeed. But for most people, producing HTML while writing in Python is just unnecessarily difficult. It has to be said, though, that the Django HTML templating mechanism is sufficiently powerful that it almost amounts to an additional language to learn.

Troggle has 66 different URL recognisers and 71 Django HTML template files to which those recognisers direct. Not all page templates are currently used, but some kind of templating system still seems necessary for sanity, maintenance and self-documentation.

The Django system is sufficiently well thought of that it forms the basis for the framework-independent templating engine Jinja - and that site has a good discussion of whether templating is a good thing or not. There are about 20 different Python template engines.]
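Rad's "just write the loops in Python" alternative can be sketched as follows. The data and function names here are invented for illustration; the point is that the nested loops a template would contain can indeed live in plain Python, at the cost of mixing markup generation into program code:

```python
from html import escape

# A sketch of generating an expo-members table directly in Python,
# instead of via a Django/Jinja template. The data is invented.
# The rough template equivalent would be:
#   {% for year, members in expos %}<tr><th>{{ year }}</th>...{% endfor %}
def expo_table(expos):
    rows = []
    for year, members in expos:
        # Inner loop: one cell per member, escaped against stray markup.
        cells = "".join(f"<td>{escape(m)}</td>" for m in members)
        rows.append(f"<tr><th>{year}</th>{cells}</tr>")
    return "<table>\n" + "\n".join(rows) + "\n</table>"

html = expo_table([(1999, ["Alice", "Bob"]), (2000, ["Carol"])])
print(html)
```

This works, but it is exactly the mixing of logic and presentation that template engines were invented to separate, which is the trade-off the bracketed comment above is weighing.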

I'd advocate the following solution:
[A reasonable proposal, but it needs quantifying against all the things troggle does that Rad was unaware of. That will not be a "small number", but it needs estimating. We don't need everything troggle does for us, of course, but that doesn't mean that removing Django/troggle will reduce the total amount of code. The input data parsers will obviously be nearly the same size.

He is actually proposing building a shantytown 'built from common, inexpensive materials and simple tools. Shantytowns can be built using relatively unskilled labor. Even though the labor force is "unskilled" in the customary sense, the construction and maintenance of this sort of housing can be quite labor intensive. There is little specialization. Each housing unit is constructed and maintained primarily by its inhabitants, and each inhabitant must be a jack of all the necessary trades. There is little concern for infrastructure, since infrastructure requires coordination and capital, and specialized resources, equipment, and skills. There is little overall planning or regulation of growth.']


Why do this:
[This vastly underestimates the number of things that troggle does for us. See "Troggle: a revised system for cave data management".] And a VM is not required to run and debug troggle: Sam has produced a Docker variant which he uses extensively, and I run it directly on local WSL/Ubuntu under Windows 10.

Troggle today has 6,400 non-comment lines of Python and 2,500 non-comment lines of Django HTML template code, plus the integration with the in-browser HTML editor in JavaScript. Half of the Python is in the parsers, which will not change whatever we do. Django itself is much, much bigger, and includes all the security middleware necessary on the web today.

But maintaining the code with the regular Django updates is a heavy job.

"the horrifying url rewrites that correspond to no files" were bugs introduced by people who edited troggle without knowing what they were doing. We now have a test suite and these have all been fixed.

Troggle is now packaged such that it can run entirely on a standalone laptop and re-loads from scratch in 2 minutes, not 5 hours. So if one has a microSD card with 40GB of historical scanned images and photos, it will run on any Windows or Linux laptop. Even at top camp. ]

How much work would this actually take:
[The effort estimate is similarly a gross underestimate because (a) he assumes one script per page of output, forgetting all the core work needed to create a central, consistent dataset, and (b) he misses most of the functionality we use without realising it, because it is built into Django's SQL system, such as multi-user operation.

Eventually we will have to migrate from Django, of course, as it will one day fail to keep up with the rest of the world. Right now we need to get ourselves onto Python 3 so that we can use an LTS release which has current security updates; this is more urgent for Django than for Linux. In Ubuntu terms we are on 18.04 LTS (Debian 10), which has no free maintenance updates after 2023. We should plan to migrate troggle from Django to another framework in about 2025. See stroggle above.]

Things this [Rad's] solution doesn't solve:
[Creating a cave description for a new cave, and especially linking in images, is currently so difficult that only a couple of people can do it. Fixing this is a matter of urgency. No one should have to guess what the path to a file will be before the file exists. We need a file-uploading system to put things in the right place; this would help with photos too.]

Return to: Troggle design and future implementations
Return to: Troggle intro
Troggle index: Index of all troggle documents