CUCC Expedition Handbook

Troggle Data Import

Troggle - Reset and import data

The stand-alone Python script imports data from files into the troggle database (sqlite or MariaDB). It is separate from the process that runs troggle and serves the data as webpages (via apache), but it is plugged into the same hierarchy of django python files.

In the troggle directory:

$ python

Usage is 'python command [runlabel]'
             where command is:
             test      - testing... imports people and prints the profile. Deletes nothing.
             profile   - prints the profile from previous runs. Imports nothing.
                           del      - deletes the last entry
                           delfirst - deletes the first entry

             reset     - normal usage: clear database and reread everything from files.

             init      - initialisation. Automatic if you run reset.
             caves     - read in the caves (must run first after initialisation)
             people    - read in the people from folk.csv (must run after 'caves')
             logbooks  - read in the logbooks
             QMs       - read in the QM csv files (older caves only)
             scans     - the survey scans in all the wallets (must run before survex)
             drawings  - read in the Tunnel & Therion files - which scans the survey scans too
             survex    - read in the survex files - all the survex blocks and entrances x/y/z

             dumplogbooks - not used: writes out autologbooks (not currently working)

             and [runlabel] is an optional string identifying this run of the script
             in the stored profiling data 'import-profile.json'

             caves and logbooks must be run on an empty db before the others as they
             set up db tables used by the others.
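The command-and-runlabel interface above can be sketched with stdlib argument parsing. This is a hypothetical illustration, not troggle's actual code; the parser name and handler structure are made up, and only the command names are taken from the usage text.

```python
import argparse

# Commands from the usage text above. 'caves' and 'logbooks' must run
# before the others on an empty db, as they set up tables the rest use.
JOBS = ["test", "profile", "reset", "init", "caves", "people",
        "logbooks", "QMs", "scans", "drawings", "survex"]

def parse_run(argv):
    """Parse 'command [runlabel]' as the usage text describes.

    The runlabel is an optional string identifying this run of the
    script in the stored profiling data.
    """
    parser = argparse.ArgumentParser(prog="import-script")  # hypothetical name
    parser.add_argument("command", choices=JOBS)
    parser.add_argument("runlabel", nargs="?", default=None,
                        help="optional label stored in the profiling data")
    return parser.parse_args(argv)

args = parse_run(["survex", "3_16_2022"])
print(args.command, args.runlabel)
```

A real dispatcher would also have to encode the ordering constraints (e.g. 'scans' before 'survex'), which is exactly the intertwining the reset option sidesteps.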

On the main server you should only ever run the full reset option. The different stages are not cleanly separated now, and apparently never have been. Later quick fixes to accommodate last-minute requests for wallets and survex files have further intertwined the precedence relationships.

On a clean computer using sqlite a complete import now takes 100 seconds if nothing else is running (200 s if running from an SD card rather than an SSD). On the shared expo server it takes 600 s. More than half of the server time is spent reinitialising the MariaDB database, which is much, much slower than sqlite on a development machine.

These options exist so that, when doing software development on your own troggle laptop, you can speed up debugging (e.g. of the survex import) by skipping the logbooks data entry.

Here is an example of the output after it runs, showing which options were used recently and how long each option took (in seconds).

*  importing troggle/
 * importing troggle/
 - settings on loading
 - settings on loading
 - Memory footprint before loading Django: 8.746 MB
*  importing troggle/
 - Memory footprint after loading Django: 31.863 MB
-- start    django.db.backends.sqlite3 troggle.sqlite
** Running job  3_16_2022 to troggle.sqlite
-- Initial memory in use 31.906 MB
Reinitialising db troggle.sqlite
 - deleting troggle.sqlite
 - Migrating: troggle.sqlite
No changes detected in app 'core'
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, core, sessions
....much more output from all the import steps....

** Ended job   3_16_2022  -  89.7 seconds total.
     days ago   -312.64   -95.05   -95.04   -10.65     this
  runlabel (s)      7th      000   wsl2_2     3_05 3_16_2022
    reinit (s)      1.6      3.1      3.1      2.5      1.5    -40.2%
     caves (s)      7.2     12.5     12.5      7.3      6.8     -7.2%
    people (s)      9.8     11.6     11.7      9.5      9.1     -3.8%
  logbooks (s)     21.2     41.0     40.9     20.2     19.9     -1.5%
       QMs (s)      7.7     87.5     87.0      7.8      7.1     -8.6%
     scans (s)      1.7     19.9     20.2      2.0      1.7    -14.6%
    survex (s)     80.2    143.6     52.1     31.0     36.5     17.8%
  drawings (s)      6.0     13.5      8.9      5.2      6.9     33.8%
[This data is from March 2022 on an 11-year-old PC: Win10, WSL1 + Ubuntu 20.04, Intel Core i7-2600K, solid-state drive.]

The last column shows the percentage change in the import runtime for each class of data. This varies quite a bit depending on what else is running on the computer and how much has been put into virtual memory and file caches by the operating system.

The file import_profile.json holds these historic times. Delete it to get a clean slate.
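The percentage-change column can be computed from the stored timings of the two most recent runs. The sketch below assumes a simple JSON layout for the profiling file; the real file written by troggle may be structured differently, and its exact formula (and rounding) may not match this one.

```python
import json

# Assumed layout: a list of runs, each a dict of step name -> seconds.
# Timings here are taken from the example table above.
sample = json.loads("""
[
  {"runlabel": "3_05",      "caves": 7.3, "survex": 31.0},
  {"runlabel": "3_16_2022", "caves": 6.8, "survex": 36.5}
]
""")

def percent_change(runs, step):
    """Percentage change in a step's runtime between the last two runs."""
    prev, this = runs[-2][step], runs[-1][step]
    return 100.0 * (this - prev) / prev

for step in ("caves", "survex"):
    print(f"{step:>8}: {percent_change(sample, step):+.1f}%")
```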

Logging Import Errors

Import glitches are documented on the Data Issues page. You should always check this after any import. (Don't worry about the xTherion "Un-parsed image" messages; this is work in progress.)

Detailed logs are created in the troggle folder from which you ran the import.

Severe errors are also printed to the terminal where you are running the import, so watch this. It also prints to the terminal the duration of each step and the memory in use while importing the survex files.
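The "Memory footprint" lines in the sample output above could be produced with the stdlib resource module. This is a hedged sketch of the general technique, not troggle's actual reporting code, and it is Unix-only.

```python
import resource
import sys

def memory_mb():
    """Peak resident set size of this process in MB (Unix only).

    Note: ru_maxrss is reported in kilobytes on Linux but in bytes
    on macOS, so normalise before converting.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        rss //= 1024
    return rss / 1024.0

print(f" - Memory footprint: {memory_mb():.3f} MB")
```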

Return to: Troggle intro
Troggle index: Index of all troggle documents