21st February 2024
I have gotten quite fond of Just lately, largely thanks to how it forces you into the habit of creating structured documentation for the various commands and scripts that you end up writing.
When adding a Justfile to a Python/Django project the other day, I found myself in a situation where I wanted to make sure that all commands ran in a virtual environment. However, because Just runs each line in a separate shell, it is not possible to activate the virtual environment on one line and then run a command on the next.
The only (sane) way I found to solve this was to prefix each command with the path to the virtual environment’s Python or Pip binary. This is not ideal, but it’s likely that you and your collaborators will have settled on a naming convention for the virtual environment directory anyway.
Here is a full example of a Justfile from one of my Django projects:
# load .env file
set dotenv-load

@_default:
    just --list

# set up virtual environment, install dependencies, and run migrations
setup:
    python3 -m venv .venv
    ./.venv/bin/pip install -r requirements.txt
    ./.venv/bin/python -Wa manage.py migrate

# run the development server
run:
    ./.venv/bin/python -Wa manage.py runserver

# run the test suite
test:
    ./.venv/bin/python -Wa manage.py test

# virtual environment wrapper for manage.py
manage *COMMAND:
    ./.venv/bin/python manage.py {{COMMAND}}
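Because the manage recipe forwards its arguments (via the {{COMMAND}} interpolation above), any Django management command runs through the virtual environment. For example:

just manage createsuperuser
just manage shell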
15th February 2024
Recently VS Code and VS Codium have been throwing the following error at me when working with Snowman projects:
Visual Studio Code is unable to watch for file changes in this large workspace
Turns out that VS Code is trying to watch all the files in the .snowman directory and its subdirectories. No wonder it’s complaining; there are a lot of files in there!
Adding .snowman to the files.watcherExclude setting in the VS Code settings solved the issue across all my Snowman workspaces.
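If you want to do the same, a minimal sketch of the setting in settings.json could look like this; the exact glob pattern is just my assumption of a sensible default:

{
  "files.watcherExclude": {
    "**/.snowman/**": true
  }
}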
Now, if you do want to watch the .snowman directory for changes, one thing you can do is try to reduce the number of files in there by deleting old cache data with the following Snowman command:
snowman cache --invalidate
This can be good practice to do every now and then anyway, to keep the folder from growing larger and larger.
14th February 2024
I have previously written about how to build Snowman sites on Github Actions. Yesterday I had to figure out not only how to build Snowman sites with Gitlab CI/CD but also how to deploy them to Gitlab Pages. Not only was the Gitlab CI/CD configuration a joy compared to Github Actions, it also integrates so well with the Gitlab Pages service that any Snowman site should be able to build and deploy with the following .gitlab-ci.yml configuration:
# The Docker image that will be used to build your app
image: debian:bookworm

# Functions that should be executed before the build script is run
before_script:
  - apt-get update && apt-get install --yes ca-certificates && apt-get install --yes wget
  - wget "https://github.com/glaciers-in-archives/snowman/releases/download/0.5.0/snowman-linux-amd64" -O snowman && chmod +x snowman

pages:
  script:
    - ./snowman build
  publish: site
  artifacts:
    paths:
      # The folder that contains the files to be exposed at the Page URL
      - site
  rules:
    # This ensures that only pushes to the default branch will trigger
    # a pages deploy
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
Now, if you want to build it with local RDF files, you would need to set up Oxigraph or another SPARQL service, just like in the Github Actions example. I haven’t needed that yet, so I’ll leave that exercise to the reader.
30th November 2023
Snowman is a static site generator for SPARQL backends. Since its inception, a goal has been that one should be able to use it to build large sites with 100,000 pages. One way Snowman makes this possible is by relying heavily on caching all SPARQL queries.
Building the Govdirectory website from a blank cache would issue thousands of SPARQL queries to the Wikidata Query Service. This, however, rarely happens since Snowman’s built-in cache “manager” allows one to selectively invalidate parts of the cache. Let’s see how one would use this feature to update parts of the Govdirectory website.
Basic real-world examples
Remove all top-level country data
snowman cache countries.rq --invalidate
The above invalidates the cache for the countries.rq query.
Remove all account data for Iceland’s Ministry for Foreign Affairs
snowman cache account-data.rq Q15983772 --invalidate
The above invalidates the cache for the instance of the account-data.rq query which was called with Q15983772 as its only argument.
An advanced real-world example
Now, what if you want to update all account data for all Icelandic government agencies? Because the account-data.rq query is no different between countries, you can’t rely only on Snowman’s cache invalidation. Instead, we need to involve some scripting.
Update all account data for all Icelandic government agencies
#!/bin/sh
for i in $(find site/iceland/* -type d);
do
  # extract the Wikidata QID (e.g. Q15983772) from the directory path
  qid=$(echo ${i%%/} | cut -f3 -d"/");
  echo $qid
  snowman cache account-data.rq $qid --invalidate
done
The above script takes advantage of the fact that Govdirectory uses the identifiers from Wikidata to both build its output URIs and parameterize its SPARQL queries. The script iterates over all directories in the site/iceland/ directory (site being the directory to which Snowman writes its output) and extracts the Wikidata identifier from the directory names. It then invalidates the cache for the account-data.rq query for each of the directories.
Conclusion
Behind the scenes Snowman’s cache manager will first hash the query file name and subsequently the issued query. Thus, a hierarchy of directories is created where the first level is the hash of the query file name and the second level is the hash of the issued query. This is what enables Snowman’s support for selectively invalidating the cache.
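As a rough illustration of that hierarchy, the cache directory under .snowman ends up looking something like this; the hashes below are made up and the exact file layout is an implementation detail:

6f3a…/        # first level: hash of the query file name, e.g. account-data.rq
├── 1c9b…     # second level: hash of the query issued with Q15983772
└── 8d2e…     # second level: hash of the query issued with another QID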
In the cache, Snowman stores the raw SPARQL result sets as JSON, and the cache command allows one to inspect the cache. For example, to see the cache for the account-data.rq query for the Icelandic Ministry for Foreign Affairs one would run:
snowman cache account-data.rq Q15983772
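Since Snowman stores the raw result sets, what you get back is standard SPARQL JSON results, roughly along these lines; the variable names and values below are made up for illustration:

{
  "head": { "vars": ["service", "account"] },
  "results": {
    "bindings": [
      {
        "service": { "type": "uri", "value": "http://www.wikidata.org/entity/Q918" },
        "account": { "type": "literal", "value": "MFAIceland" }
      }
    ]
  }
}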
When planning to build a large site with Snowman, I would recommend that you first put time into thinking about how easy your information/data model is to query. That can be tricky with a project utilising open models such as those of Wikidata and Wikibase, but Snowman
Do you have suggestions for how Snowman could improve its support for large sites? Check out the dedicated large-project-support issue tag on Github!