Troggle is the software which runs the expo cave survey data management system and website. It is not the only thing running on the expo server.
This is the folder structure of the repo :expoweb: which is also the root of the website. Note that the webserver (apache) presents many more apparent folders, such as expofiles, than actually exist in the repo, because apache and troggle map extra URLs onto other locations.
In the list below only the handbook folder has been expanded. The years folder includes
42 subfolders from 1976 to 2019.
The handbook illustrates the i/t/l idiom, whereby an image file (i) is displayed with a paragraph of text as an HTML file (l), and a thumbnail image (t) is included in another document. For an example, see the photographic guide to the walk from the toll road car park.
See the live report of which URLs resolve to which actual folders at pathsreport.
The server configuration scripts are in the file troggle/debian/serversetup and are also documented with notes in troggle/README.txt. It is intended that the full documentation will be copied here in due course.
It is hoped that we will develop fully automated server setup scripts (such as those used by CUYC for their Django system).
Apache needs to run as user 'expo', not the standard 'www-data'. This is due to a basic incompatibility in permissions between apache and git: git does not honour existing permissions exactly. See How to run apache as an alternate user.
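On a Debian system this change is normally made in /etc/apache2/envvars; a minimal sketch, assuming the stock envvars layout (followed by a restart of apache2):

```
export APACHE_RUN_USER=expo
export APACHE_RUN_GROUP=expo
```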
Although troggle will appear to work with an sqlite database, it needs a proper concurrent-access database to manage multiple users. sqlite is effectively single-user: a separate instance of django is created for each page access, so even one person looking at several pages at once counts as "multi-user".
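As a sketch of preparing a concurrent MariaDB/MySQL backend (the database name, user and password here are illustrative placeholders, not the real server credentials):

```
CREATE DATABASE troggle DEFAULT CHARACTER SET utf8mb4;
CREATE USER 'expo'@'localhost' IDENTIFIED BY 'CHANGEME';
GRANT ALL PRIVILEGES ON troggle.* TO 'expo'@'localhost';
```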
The folder structure on the server is as shown below. It is all
in the home folder of the user expo, i.e. in
expofiles contains ~40GB of files which are published by the webserver but which are not parsed by troggle. 28GB of these are photographs in /expofiles/photos/ and there are over 4GB of scanned images of surveys in /expofiles/surveyscans/. There is a cleaned, complete copy of the documentation for the tunnelX cave plan drawing package in /expofiles/tunnelwiki/.
Presumably these are used by something else hosted on the server? In any case, if you are setting up a new troggle server you don't need them.
Installed independently of troggle, simply with apt install xapian-omega, and
then configured into the troggle-generated menus in css/main2.css.
You can see it at the bottom of the top-left menu on this page and on nearly all pages of the handbook.
The search function is
connected with an apache configuration line:
ScriptAlias /search /usr/lib/cgi-bin/omega/omega in ~expo/config/apache/expo.conf.
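For a new server the relevant fragment of expo.conf would look something like the following sketch; only the ScriptAlias line is confirmed above, the Directory stanza is an assumption about how the CGI is permitted:

```
ScriptAlias /search /usr/lib/cgi-bin/omega/omega
<Directory /usr/lib/cgi-bin/omega>
    Options +ExecCGI
    Require all granted
</Directory>
```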
Installed by Wookey in May 2020.
This is installed on the server and accessed at /kanboard. It is an open-source equivalent of the Trello kanban card task-planning system. The 2022 expo uses Trello itself (separate login required), but we intend to move to our own kanboard from 2023.
This is a perl script, served by the webserver using this apache configuration:
    # bank of expo
    # current expedition
    ScriptAlias /boe /home/expo/boe/boc/boc.pl
    <Directory /home/expo/boe/boc>
        AddHandler cgi-script .pl
        SetHandler cgi-script
        Options +ExecCGI
        Require all granted
    </Directory>
Handbook documentation for its use is at The Bank of Expo.
This is a compiled executable written in C which, like boe, is installed as an Apache CGI redirection. The installation instructions are at https://git.zx2c4.com/cgit/tree/README but we use the Debian package https://packages.debian.org/stable/cgit.
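On Debian the package installs the CGI at /usr/lib/cgit/cgit.cgi, so the apache wiring is analogous to boe's. A sketch (the URL path /cgit is an assumption, not confirmed above):

```
ScriptAlias /cgit /usr/lib/cgit/cgit.cgi
<Directory /usr/lib/cgit>
    Options +ExecCGI
    Require all granted
</Directory>
```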
This is currently disabled (as of Feb. 2022). If you need anything that would normally be run frequently (e.g. bins), you currently have to run it manually.
The server runs its hourly, daily and weekly scripts using the anacron system. In ~expo/config/cron/ on the server there are expo.hourly and expo.daily scripts, and these are (or should be) launched at the appropriate times by root from /etc/crontab. This is not obviously working on the server at present.
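A sketch of the /etc/crontab entries that would launch these scripts (the times shown are illustrative, not the server's actual schedule):

```
17 *  * * *  root  /home/expo/config/cron/expo.hourly
25 6  * * *  root  /home/expo/config/cron/expo.daily
```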
To do this, run
$ python manage.py runserver 8000 -v 3
from the troggle directory. This runs it on port 8000, so you see the website at localhost:8000.
gunicorn also works. This runs with 9 workers (suitable for a 4-core processor;
-w takes 2n+1 where n is the number of cores of your processor):
$ gunicorn --reload -w 9 -b :8000 wsgi
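The worker count follows the usual gunicorn rule of thumb of 2n+1 workers for n cores. A quick shell check of the arithmetic for the 4-core case assumed above:

```shell
# Worker formula sketch: 2*n + 1 workers for n CPU cores.
CORES=4                        # assumed 4-core server
WORKERS=$((2 * CORES + 1))
echo "$WORKERS"                # prints 9, matching -w 9 above
```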