Practical use of troggle quickly shows up serious shortcomings:
Updated: 5 September 2023
Mostly these are not problems with troggle alone, but areas where expo procedures have never been properly created or applied, or where there was a procedure but it fell out of use because it wasn't easy enough to persist with.
This is absolutely crap. We have a manual, undocumented, probably incomplete offline set of scripts which produced a GPX file in 2019 and again in 2023 (Mark Shinwell), and we have no easy way of delivering that file to a mobile device. The troggle pages for the cave and entrance descriptions do not have any location information on them, as the GPX is generated quite separately from the survex files and fixed points. And it is all in Austrian grid coordinates, not WGS84, too.
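As an illustration of what a documented replacement script might look like, here is a minimal sketch: it assumes the fixes are in MGI / Austrian GK M31 (EPSG:31258), which should be checked against the datum actually used in the fixed-points files, and uses the pyproj library; the function name and data layout are invented for this example.

```python
# Hypothetical sketch: convert Austrian-grid entrance fixes to WGS84 and write GPX waypoints.
# The EPSG code is an assumption -- check the datum actually used in the fixed-points files.
from pyproj import Transformer
from xml.sax.saxutils import escape

to_wgs84 = Transformer.from_crs("EPSG:31258", "EPSG:4326", always_xy=True)

def write_gpx(entrances, path="entrances.gpx"):
    """entrances: iterable of (name, easting, northing, altitude) in the Austrian grid."""
    with open(path, "w", encoding="utf-8") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<gpx version="1.1" creator="expo-sketch" '
                'xmlns="http://www.topografix.com/GPX/1/1">\n')
        for name, easting, northing, alt in entrances:
            lon, lat = to_wgs84.transform(easting, northing)
            f.write(f'  <wpt lat="{lat:.6f}" lon="{lon:.6f}">\n')
            f.write(f'    <ele>{alt:.0f}</ele>\n')
            f.write(f'    <name>{escape(name)}</name>\n')
            f.write('  </wpt>\n')
        f.write('</gpx>\n')
```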
We have no procedure whatsoever to capture photos of cave entrances and to get them associated with the entrance data in troggle.
Our former prospecting guide has died of terminal obsolescence.
It should be perfectly possible to convert the entrance data as listed in the second table on the troggle 'ents' page into WGS84 and display it in the entrance description pages.
We really want a prospecting map showing entrances which are clickable, so that the entrance description pops up.
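For illustration only, a clickable map could be produced with something like the folium library (an assumption, not something troggle currently uses), with each marker popping up a link to the entrance description page:

```python
# Hypothetical sketch: a clickable prospecting map. Coordinates must already be in WGS84
# (see the GPX sketch above); the centre point is a rough, illustrative plateau location.
import folium

def prospecting_map(entrances, outfile="prospecting.html"):
    """entrances: iterable of (name, lat, lon, description_url)."""
    m = folium.Map(location=[47.69, 13.82], zoom_start=14)
    for name, lat, lon, url in entrances:
        popup = folium.Popup(f'<a href="{url}">{name}</a>', max_width=250)
        folium.Marker(location=[lat, lon], popup=popup).add_to(m)
    m.save(outfile)
```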
Our procedures for 'tag', 'exact tag', and 'other tag' are undocumented and need to be redone.
See the current ugly situation.
Strangely, we have no process at all to allow anyone to download the archived Tunnel or Therion XML files and also download the referenced source scan files at the same time so that the references within the XML files actually work.
The XML files contain cross-reference links to the scan files as they were laid out on the computer where the tunnelling/therioning was done, and that layout is different for every machine because we have no recommended standard setup.
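A purely hypothetical sketch of the kind of rewrite a standard setup would allow: replace machine-specific absolute prefixes in the drawing files with one agreed scans root. The prefix patterns and target root below are invented for this example; the real fix is agreeing a standard directory layout so that no rewriting is ever needed.

```python
# Hypothetical sketch only: normalise machine-specific scan paths in Tunnel/Therion files.
import re
from pathlib import Path

MACHINE_PREFIXES = [r"C:/Users/[^/]+/expofiles/surveyscans/",
                    r"/home/[^/]+/expofiles/surveyscans/"]
STANDARD_ROOT = "../../expofiles/surveyscans/"   # assumed relative layout

def rewrite_scan_refs(xml_path):
    text = Path(xml_path).read_text(encoding="utf-8", errors="replace")
    for prefix in MACHINE_PREFIXES:
        text = re.sub(prefix, STANDARD_ROOT, text)
    Path(xml_path).write_text(text, encoding="utf-8")
```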
We have no procedure for this. We also have no proper procedure (or even an agreed single final location) for rigging topos. We have a bucket folder for final drawn-up surveys on expofiles.
The 2019 expo was documented on two different blogs. One of these is UK Caving, and for one post the caver used an ephemeral location for the illustrating photographs - which were very good. [We now have a documented, but convoluted, procedure for managing this, see log-blog-parsing.]
We seem to have lost the expo twitter account credentials.
We have an expo Facebook account (deprecated, as many expo users hate FB).
We used to use Slack and (in 2022) we use Trello for expo organisation. We have moved from Slack to Element, which we can archive ourselves, and maybe we can use Kanboard (ditto) next year instead of Trello.
Since 2022 we have a working online form to create or edit the cave and entrance descriptions. But the URL namespace of the form is different from where the cave description will actually be published, so getting the A HREF and IMG SRC tags correct for the referenced passage text and photos, and for links to the relevant bits of logbook, is unnecessarily convoluted.
Martin added image-adding to the HTML editor which helps a lot.
We have an increasing number of "pending" caves where we have some data about them, but nobody has written even the beginnings of a cave description file. See the pending list from the most recent import in caves.
The input parser 'drawings' does some very limited checks on Tunnel file references to survex files. Problems are reported in DataIssues and as links to wallets in expo.survex.com/dwgfiles/. We have the beginnings of a parser to do the same for Therion drawings. These are currently no better than proof-of-concept.
We could do with reports that identify survex files with no equivalent logbook entry (which would show up all the ARGE surveys, but we could filter those out); a rough sketch of the date matching involved is given after this list of wanted reports.
We could do with finding drawn-up Tunnel and Therion files with no documented link to their source data in survex files.
We have an imperfect report, therionissues, on references within Therion files.
We could do with better reports on survex files which have never been drawn up ("tunnelled").
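As a sketch of what the survex-versus-logbook matching might involve (this is not troggle code, and how the logbook dates are collected is left as an assumption), matching could be done on the *date lines in the survex files:

```python
# Rough sketch (not troggle code): list survex files whose *date has no logbook entry
# on the same date. 'logbook_dates' is assumed to be a set of datetime.date objects.
import re
from datetime import date
from pathlib import Path

DATE_RE = re.compile(r"^\s*\*date\s+(\d{4})\.(\d{2})\.(\d{2})", re.MULTILINE)

def survex_without_logbook(svx_root, logbook_dates):
    """Yield (svx file, date) pairs where no logbook entry shares the date."""
    for svx in Path(svx_root).rglob("*.svx"):
        text = svx.read_text(encoding="utf-8", errors="replace")
        for y, m, d in DATE_RE.findall(text):
            when = date(int(y), int(m), int(d))
            if when not in logbook_dates:
                yield svx, when
```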
Managing people's names
As of 2022, there are 15 people whose names troggle can't cope with at all, because troggle only handles names structured as "Forename Surname": exactly two words, each beginning with a capital letter, with no other punctuation, capital letters, extra names or initials. See the design document on handling people's names properly.
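For illustration only (this is not troggle's actual parsing code), the constraint amounts to something like the following regular expression; any name that does not match is currently a problem:

```python
# Illustrative only -- the shape of name the current import copes with: exactly two
# words, each starting with a capital letter, with no initials, hyphens or extra capitals.
import re

SIMPLE_NAME = re.compile(r"^[A-Z][a-z]+ [A-Z][a-z]+$")

for name in ["Fred Bloggs", "Wookey", "Jean-Luc Picard", "J. Smith", "Ruairidh MacLeod"]:
    print(name, "->", "ok" if SIMPLE_NAME.match(name) else "troggle can't cope")
```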
[Much better now that an external script has been incorporated into troggle and a form has been created for editing wallet data in July 2022. See 2019 Wallets]
We have all the wallets data for 1999 but it needs to be restructured into the modern file structure before troggle can make sense of it. (We have scanned survey logbooks for 1989-1998 and a wallet structure could be retrospectively created with a bit of data archaeology. If we did that, we could then apply the same data consistency tools. This is only likely to be worthwhile if we need to seriously return to 1623/161.)
We have proof of concept for one of these: a simple list of expeditions. There are 61 different types of reports produced by troggle, some of which have complex internal structure which would need several API URLs. Nobody has attempted to produce any kind of alternative special-purpose front-end yet.
This is discussed on Element and there are extensive previous discussions on Slack.
There are a lot of things where the troggle software mostly works, but is clunky or confusing, and where maintenance is difficult because several different epochs of software techniques are munged together. This makes future enhancement slower and more difficult, as historic special cases need to be re-done. This is technical debt.
We import cavers' names in three separate places: the folk list, the names of the surveyors in
survex files, and the names of expoers in the logbooks. These are inconsistently validated.
We had half a dozen different logbook HTML dialects for different eras of expo logbooks. These needed to be converted into one, simpler form, so that we could reduce the size of the logbook parsers substantially. We now (March 2023) have just one logbook format in use, and the last old logbook has been converted (Sept. 2023).
We have the HTML folder structure for complex caves in the same URL namespace as the cave descriptions generated by troggle. This is a fossil from when the cave descriptions were actual HTML files (generated offline) and needed to be local so that they would have simple access to the linked photographs and passage description pages. This prevents us using "EditThisPage" on those complex descriptions, which adds another level of pain to a task that is already much more difficult than it needs to be.
The possible alternative front-ends using the (in development) API would help. As would a complete alternative phone-centric design of the CSS and menuing system.
GPS is currently great for route-finding and locating entrances, but it is much less useful where there are steep cliffs, as the height accuracy is dreadful. When GPS altitude accuracy improves, it will open up a new range of uses.
There are some wonderful demos of fly-through surveys which use visual flow, phone ultrasonic range detection and/or lidar. These will be big files and unsuited to being stored in a git repository - as we currently do for Tunnel and Therion Drawings.
One-off attempts have been made at this for the past 15 years at least, e.g. the 3D view of the whole SMK ridge on the Areas page.
Historic QGIS work is archived at expofiles/qgis_resources.
CaveView does a wonderful job of animating 3D survex centre-lines, and when it is restored to use it will make a difference.
Julian has a high-quality lidar of the whole plateau area but it has not been as useful as we hoped.
Troggle is now sufficiently portable that we can run the entire system on a laptop: it doesn't need internet access. We could design clever caching so that an ordinary web browser could effectively take a complete copy, but if universal internet access is coming anyway, any such work would be wasted.
Open architectural issues being worked on: