The website is now large and complicated, with a lot of (too many!) moving parts. This handbook section contains information at several levels: simple 'how to add stuff' instructions for the typical expo member, more detailed instructions for cloning the site onto your own machine for more significant edits, and structural information on how it is all put together for people who want or need to change things. [This page is now so big that it needs to be split up.]
Simple instructions for updating the website (on the expo machine).
You can update the site via the troggle pages, by editing pages online via a browser ("Edit this page" on the menu on the left), by editing them locally on disk, or by checking out the relevant part to your computer and editing it there. Which is best depends on your knowledge and what you want to do. For simple addition of cave or survey data troggle is recommended. For other edits it's best if you can edit the files directly rather than using the 'edit this page' button, but that means you either need to be on expo with the expo computer, or be able to check out a local copy. If neither of these apply then using the 'edit this page' button is fine.
It's important to understand that everything on the site (except 'expofiles') is stored in a distributed version control system (DVCS) (called 'Mercurial' and accessed by most people using software called 'TortoiseHg'), which means that every edited file needs to be 'checked in' at some point. The Expo website manual goes into more detail about this, below. This stops us losing data and makes it very hard for you to screw anything up permanently, so don't worry about making changes - they can always be reverted if there is a problem. It also means that several people can work on the site on different computers at once and normally merge their changes easily.
Increasing amounts of the site are autogenerated rather than hand-edited, so you have to edit the base data, not the generated file. All autogenerated files say 'This file is autogenerated - do not edit' at the top, so check for that before wasting time on changes that will just be overwritten.
Editing the expo website is an adventure. Until now, there was no guide which explains the whole thing as a functioning system. Learning it by trial and error is non-trivial. There are lots of things we could improve about the system, and anyone with some computer nous is very welcome to muck in. It is slowly getting better organised.
This manual is organized in a how-to sort of style. The categories, rather than referring to specific elements of the website, refer to processes that a maintainer would want to do.
Use these credentials for access to the site. The user is 'expo', with a cavey:beery password. Ask someone if this isn't enough of a clue for you. This password matters for security: the whole site will get hacked by spammers or worse if you are not careful with it. Use a secure method for passing it on to others who need to know (i.e. not unencrypted email), don't publish it anywhere, and don't check it in to the website by accident. A lot of people use it and changing it is a pain for everyone, so do take a bit of care.
Note that you don't need a password to view most things, but you will need one to change them.
All the expo data is contained in 4 Mercurial repositories at expo.survex.com. This is currently hosted on a server at the university. Mercurial is a distributed version control system which allows collaborative editing and keeps track of all changes, so we can roll back and have branches if needed.
The site has been split into four parts:
All the scans, photos, presentations, fat documents and videos are stored just as files (not in version control) in 'expofiles'. See below for details on that.
Part of the website is static HTML, but quite a lot is generated by scripts. So anything you check in which affects cave data or descriptions won't appear on the site until the website update scripts are run. This happens automatically every 30 mins, but you can also kick off a manual update. See 'The expoweb-update script' below for details.
Also note that the website you see is its own Mercurial checkout (just like your local one) so that has to be 'pulled' from the server before your changes are reflected.
If you know what you are doing here is the basic info on what's where:
(if you don't know what you're doing, skip to Editing the website below.)
Photos and scans (logbooks, drawn-up cave segments) live in 'expofiles'. (This was about 60GB of stuff in 2017, which you probably don't actually need locally.) To sync the files from the server to a local expofiles directory:
rsync -av email@example.com:expofiles /home/expo
To sync the local expofiles directory back to the server:
rsync -av /home/expo/expofiles firstname.lastname@example.org:
(Do be careful not to delete piles of stuff and then rsync back, as it'll all get deleted on the server too, and we may not have backups!) Use rsync --dry-run --delete-after -a to check what would be deleted. If you are using rsync from a Windows machine you will not get all the files, as some filenames are incompatible with Windows: see more detail under 'Using Mercurial/TortoiseHg in Windows' below.
(We have an issue with rsync not using the appropriate user:group attributes for files pushed back to the server. This may not cause any problems, but watch out for it.)
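As a concrete illustration of the --dry-run check mentioned above, here is a self-contained sandbox version; the throwaway local directories below stand in for your expofiles copy and the server:

```shell
# Sandbox sketch of checking a --delete rsync before running it for real.
rm -rf /tmp/rsync-demo && mkdir -p /tmp/rsync-demo/local /tmp/rsync-demo/server
echo "kept"    > /tmp/rsync-demo/local/keep.txt
echo "kept"    > /tmp/rsync-demo/server/keep.txt
echo "at risk" > /tmp/rsync-demo/server/only-on-server.txt
# --dry-run reports what --delete-after would remove, without touching anything:
rsync --dry-run --delete-after -av /tmp/rsync-demo/local/ /tmp/rsync-demo/server/
# only-on-server.txt is listed as 'deleting ...' but is still present afterwards.
```

Only once you are happy with the reported deletions should you re-run the command without --dry-run.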
To edit the website fully, you need a Mercurial client such as TortoiseHg. Some (static text) pages can be edited directly on-line using the 'edit this page link' which you'll see if you are logged into troggle. In general dynamically-generated pages can not be edited in this way, but forms are provided for some page-types like 'caves'.
What follows is for Linux. If you are running Windows, see 'Using Mercurial/TortoiseHg in Windows' below.
Mercurial can be used from the command line, but if you prefer a GUI, TortoiseHg is highly recommended on all OSes.
Linux: install mercurial and tortoisehg-nautilus from synaptic, then restart Nautilus with nautilus -q. If it worked, you'll see the TortoiseHg menus within your Nautilus windows.
Once you've downloaded and installed a client, the first step is to create what is called a checkout of the website. This creates a copy on your machine which you can edit to your heart's content. The command to initially check out ('clone') the entire expo website is:
hg clone ssh://email@example.com/expoweb
For subsequent updates,
hg pull -u
(which pulls the latest changes and updates your working copy) will generally do the trick.
In TortoiseHg, simply right-click on the folder you want to check out into, choose "Mercurial checkout", and enter the repository URL given above.
After you've made a change, commit it to your local copy with:
hg commit (you can specify filenames to be specific)
or by right-clicking on the folder and going to commit in TortoiseHg. Mercurial can't always work out who you are. If you see a message like "abort: no username supplied", it was probably not able to deduce that from your environment. It's easiest to give it the info in a config file at ~/.hgrc (create it if it doesn't exist, or add these lines if it already does) containing something like:
[ui]
username = Firstname Lastname <firstname.lastname@example.org>
The commit has stored the changes in your local Mercurial repository, but it has not sent anything back to the server. To do that you need to:
hg push
Before pushing, you should do an hg pull to sync with upstream first. If someone else has edited the same files you may also need to do:
hg merge
hg commit
before pushing again.
Simple changes to static files will take effect immediately, but changes to dynamically-generated files (cave descriptions, QM lists etc.) will not take effect until the server runs the expoweb-update script.
This edits the file served by the webserver (Apache) on expo.survex.com, but it does not update the copy of the file in the repository there. To finish the job properly, ssh into expo.survex.com and run "hg diff" (to check what changes are pending) and then "hg commit" in the directory /home/expo/expoweb.
Read the instructions for setting up TortoiseHG in Aled's Windows 101.
In Windows: install Mercurial and TortoiseHg of the relevant flavour from https://TortoiseHg.bitbucket.io/ (ignoring antivirus/Windows warnings). This will install a submenu in your Programs menu.
To start cloning a repository: first create the folders you need for the repositories you are going to use, e.g. D:\CUCC-Expo\loser and D:\CUCC-Expo\expoweb. Then start TortoiseHg Workbench from your Programs menu, click File -> Clone repository, a dialogue box will appear. In the Source box type
for expoweb (or similar for the other repositories). In the Destination box type whatever destination you want your local copies to live in on your laptop e.g. D:\CUCC-Expo\expoweb. Hit Clone, and it should hopefully prompt you for the usual beery password.
The first time you do this it will probably not work, as your machine does not yet recognise the server. Fix this by running putty (download it from https://www.chiark.greenend.org.uk/~sgtatham/putty/) and connecting to the server 'email@example.com' (on port 22), confirming that this is the right server when prompted. If you succeed in getting a shell prompt then ssh connections are working, and TortoiseHg should be able to clone the repo and send changes back.
The script at the heart of the website update mechanism is a makefile that runs the various generation scripts. It is run every 15 minutes as a cron job (at 0, 15, 30 and 45 past the hour), but if you want to force an update more quickly you can run it by hand on the server.
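Putting that together, the server-side setup presumably looks something like the crontab entry below; the path and the plain make invocation are assumptions based on the description above, not copied from the real server:

```
# Illustrative only - the real crontab lives on the server and may differ.
# Run the expoweb-update makefile every 15 minutes:
0,15,30,45 * * * * cd /home/expo/expoweb && make
```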
The scripts are generally under the 'noinfo' section of the site, just because that has (had) some access control. This will get changed to something more sensible at some point.
Cave description pages are automatically generated from a set of cave files in noinfo/cave_data/ and noinfo/entrance_data/. These files are named -
(If you remember something about CAVETAB2.CSV for editing caves, that was superseded in 2012).
Each year's expo has a documentation index which is in the folder years/nnnn/, so to check out the 2011 page, for example, you would use
hg clone ssh://firstname.lastname@example.org/expoweb/years/2011
Once you have pushed your changes to the repository you need to update the server's local copies, by ssh into the server and running hg update in the expoweb folder.
Logbooks are typed up and put under the years/nnnn/ directory as 'logbook.html'.
Do whatever you like to try and represent the logbook in HTML. The only rigid structure is the markup that allows troggle to parse the files into 'trips':
<div class="tripdate" id="t2007-07-12B">2007-07-12</div>
<div class="trippeople"><u>Jenny Black</u>, Olly Betts</div>
<div class="triptitle">Top Camp - Setting up 76 bivi</div>
<div class="timeug">T/U 10 mins</div>
Note that the IDs must be unique, so they are generated from 't' plus the trip date, plus a, b, c etc. when there is more than one trip on a day.
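For example, two trips on the same day would get IDs suffixed a and b; the people, titles and times below are invented purely to show the shape:

```html
<div class="tripdate" id="t2007-07-13a">2007-07-13</div>
<div class="trippeople"><u>A Caver</u>, Another Caver</div>
<div class="triptitle">Morning trip</div>
<div class="timeug">T/U 1 hr</div>

<div class="tripdate" id="t2007-07-13b">2007-07-13</div>
<div class="trippeople"><u>Another Caver</u></div>
<div class="triptitle">Afternoon trip</div>
<div class="timeug">T/U 2 hrs</div>
```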
Older logbooks (prior to 2007) were stored as logbook.txt with just a bit of consistent markup to allow troggle parsing.
The formatting was largely freeform, with a bit of markup ('===' around the header, '|' bars separating the date, title and people).
So the format should be:
===2009-07-21|204 - Rigging entrance series| Becka Lawson, Emma Wilson, Jess Stirrups, Tony Rooke===
<Text of logbook entry>
T/U: Jess 1 hr, Emma 0.5 hr
Photos are stored in the general file area of the site under http://expo.survex.com/expofiles/photos/
GPS tracks over the surface of the plateau (GPX files from your handheld GPS or phone) are stored in the general file area of the site under http://expo.survex.com/expofiles/gpslogs/
They are each organised by year and by photographer (walker). Please use directory names like 2014/YourName (i.e. no spaces; CamelCase for names).
They are viewed at http://expo.survex.com/photos/
Photos and GPS tracks can be uploaded in two basic ways:
See Photo/File Upload Instructions for using webdav/webfolders or winscp from your browser or with other tools, on various OSes.
To be written.
There is a table in the survey book which has a list of all the surveys and whether or not they have been drawn up, and some other info.
This is generated by the script tablizebyname-csv.pl from the input file Surveys.csv
The CUCC Website was originally created by Andy Waddington in the early 1990s and was hosted by Wookey. The VCS was CVS. The whole site was just static HTML, carefully designed to be RISCOS-compatible (hence the short 10-character filenames) as both Wadders and Wookey were RISCOS people then. Wadders wrote a huge amount of info collecting expo history, photos, cave data etc.
Martin Green added the SURVTAB.CSV file to contain tabulated data for many caves around 1999, and a script to generate the index pages from it. Dave Loeffler added scripts and programs to generate the prospecting maps in 2004. The server moved to Mark Shinwell's machine in the early 2000s, and the VCS was updated to subversion.
In 2006 Aaron Curtis decided that a more modern set of generated, database-based pages made sense, and so wrote Troggle. This uses Django to generate pages. This reads in all the logbooks and surveys and provides a nice way to access them, and enter new data. It was separate for a while until Martin Green added code to merge the old static pages and new troggle dynamic pages into the same site. Work on Troggle still continues sporadically.
After expo 2009 the VCS was updated to hg, because a DVCS makes a great deal of sense for expo (where it goes offline for a month or two and nearly all the year's edits happen).
The site was moved to Julian Todd's seagrass server in 2010, but the change from a 32-bit to a 64-bit machine broke the website autogeneration code, which was only fixed in early 2011, allowing the move to complete. The data was split into four separate repositories: the website, troggle, the survey data and the tunnel data. Seagrass was turned off at the end of 2013, and the site has been hosted by Sam Wenham at the university since Feb 2014.
This section is entirely out of date (June 2014) and awaiting deletion or rewriting.
The way things normally work, python or perl scripts turn CSV input into HTML for the website. Note that:
The CSV files are actually tab-separated, not comma-separated despite the extension.
The scripts can be very picky, and editing the CSVs with Microsoft Excel has broken them in the past; it's not clear whether this is still the case.
Overview of the automagical scripts on the expo website:

Script: /svn/trunk/expoweb/noinfo/make-indxal4.pl
Input: /svn/trunk/expoweb/noinfo/CAVETAB2.CSV
Output: many
Purpose: produces all cave description pages

Script: /svn/trunk/expoweb/noinfo/make-folklist.py
Input: /svn/trunk/expoweb/noinfo/folk.csv
Output: http://cucc.survex.com/expo/folk/index.htm
Purpose: table of all expo members

Scripts: /svn/trunk/surveys/tablize-csv.pl and /svn/trunk/surveys/tablizebyname-csv.pl
Input: /svn/trunk/surveys/Surveys.csv
Output: http://expo.survex.com/expo/surveys/surveytable.html and http://expo.survex.com/surveys/surtabnam.html
Purpose: survey status page, the "wall of shame" to keep track of who still needs to draw which surveys
This is likely to change with structural change to the site, with style changes which we expect to implement and with the method by which the info is actually stored and served up.
... and it's not written yet, either :-)
Mercurial is a distributed revision control system. On expo this means that many people can edit and merge their changes with the Mercurial server in the Tatty Hut even if there is no Internet access. Also anyone who is up to date with the Tatty Hut can take their laptop somewhere where there is Internet access and update expo.survex.com - which will then get all the updates done by everyone on expo.
In principle, survey notes can be typed into a laptop up on the plateau which is then synchronised with the Tatty Hut on returning to base.
Mercurial is inefficient for scanned survey notes, which are large files that do not get modified, so they are kept as a plain directory of files 'expofiles'.