21st November 2023
Recently I have been ranting a little about the many different solutions for setting up MediaWiki development environments. Visit mediawiki.org and you will likely find solutions based on Docker, Vagrant, and custom CLI tools. Some are maintained, some only work on particular Linux distros, and so on.
However, all you need for the vast majority of MediaWiki development is PHP and SQLite.
MediaWiki has limited SQLite support according to mediawiki.org, but I have found that it works in most cases, and known incompatibilities are tracked on Phabricator.
On Fedora, I get all the requirements for running MediaWiki with `dnf install php php-pdo`. Then I run `php -S localhost:8080` in the root of the MediaWiki repository and I’m good to go.
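In other words, the whole setup boils down to a couple of commands (a sketch; the extensions you work on may require additional PHP modules):

```sh
# on Fedora, from the root of a MediaWiki checkout
sudo dnf install php php-pdo
php -S localhost:8080
# then open http://localhost:8080 and run the installer, picking SQLite as the database
```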
A downside is that I need to set up OpenSearch or Elasticsearch once in a while for tasks requiring CirrusSearch, but that is a price I’m happy to pay for a stable and lightweight development environment.
15th November 2023
Snowman is a static site generator for SPARQL backends. HTML templates and SPARQL queries in, a website out.
I have a set of Snowman sites that need to be built and deployed once a day to ensure that they are up to date. A while back I wanted to do this for one of them using GitHub Actions.
The following GitHub Actions workflow will:
- Checkout the repository
- Download the Snowman binary and make it executable
- Run the Snowman `build` command
```yaml
name: build-and-deploy
on: [push]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    env:
      SNOWMAN_BINARY: https://github.com/glaciers-in-archives/snowman/releases/download/0.5.0/snowman-linux-amd64
    steps:
      - uses: actions/checkout@v3
      - name: Download Snowman
        run: wget "${{ env.SNOWMAN_BINARY }}" -O snowman && chmod +x snowman
      - name: Run SPARQL server and build site
        run: ./snowman build
      # additional steps for deploying the contents of "site" directory
```
That’s it. From here you would add additional steps for deploying the contents of the `site` directory to the host of your choice.
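For example, if GitHub Pages happens to be that host, a deploy step could look something like this (a sketch using the third-party peaceiris/actions-gh-pages action, not part of my actual setup):

```yaml
      # hypothetical deploy step, appended to the steps above
      - name: Deploy to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./site
```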
Well, if you, like me, like having small sites that keep their data in one or a handful of RDF files, you won’t have a SPARQL endpoint to query. The solution? Run the Oxigraph database in the same GitHub Actions workflow!
In addition to the steps above, the following GitHub Actions workflow will:
- Download the Oxigraph binary and make it executable
- Load the RDF data into the Oxigraph database
- Run Oxigraph and wait for it to start before running Snowman
```yaml
name: build-and-deploy
on: [push]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    env:
      OXIGRAPH_BINARY: https://github.com/oxigraph/oxigraph/releases/download/v0.3.16/oxigraph_server_v0.3.16_x86_64_linux_gnu
      SNOWMAN_BINARY: https://github.com/glaciers-in-archives/snowman/releases/download/0.5.0/snowman-linux-amd64
    steps:
      - uses: actions/checkout@v3
      - name: Download Oxigraph
        run: wget "${{ env.OXIGRAPH_BINARY }}" -O oxigraph && chmod +x oxigraph
      - name: Download Snowman
        run: wget "${{ env.SNOWMAN_BINARY }}" -O snowman && chmod +x snowman
      - name: Load RDF
        run: ./oxigraph load --file static/data.ttl --location datastore
      - name: Run SPARQL server and build site
        run: ./oxigraph serve --location datastore & sleep 4 && ./snowman build
```
The `sleep 4` is there to give Oxigraph some time to start before running Snowman. It’s not a very elegant solution, and it would be awesome if someone (i.e. me) could make a service container for Oxigraph.
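If you want something slightly more robust than a fixed sleep, one option is to poll the server until it responds before kicking off the build (this sketch assumes Oxigraph is listening on its default port, 7878):

```yaml
      - name: Run SPARQL server and build site
        run: |
          ./oxigraph serve --location datastore &
          # wait up to ~20 seconds for Oxigraph to answer on its default port
          for i in $(seq 1 20); do
            curl -sf http://localhost:7878/ > /dev/null && break
            sleep 1
          done
          ./snowman build
```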
Still looking for a full real-world example? Check out the GitHub Actions workflow used to deploy FornPunkt’s Open Data site from a single DCAT RDF file.
17th July 2023
A while back I made a goal along the lines of “make all the data on fornpunkt.se available as an RSS feed”. One might ask why; well, I think that one shouldn’t be required to use the FornPunkt website to access and reuse its content. I also think that RSS is a great format for this, given that most content has a temporal component and that RSS has many great clients, integrations, and extensions.
- All posts? A GeoRSS feed.
- All posts with a given tag? A GeoRSS feed.
- All posts by a given user? A GeoRSS feed, optionally with an access token.
- All tags? An RSS feed.
- Comments? An RSS feed.
- All comments on a given post? An RSS feed.
- All comments by a given user? An RSS feed, optionally with an access token.
- Comments classified as damage reports? An RSS feed.
- Annotations? An RSS feed.
And so on. Some of these are more useful than others: the GeoRSS ones appear to be rather popular, while the “comments on a given post” feeds see little use. It wasn’t much overhead to add these feeds, as I already had two base classes for RSS and GeoRSS feeds in the core Django application.
In the end, not only do users get the option to use one of many RSS clients, but there is also an extra set of APIs that might be more accessible than many of the other APIs, given that RSS is a well-known format and easy to discover. Will I keep to this goal? Will I expand it to other sites? I don’t know, but given the low overhead in this case, I do not yet regret it.
12th May 2023
You get home late and go to bed only to find that you missed the New York Times’ daily sudoku puzzle? Maybe you skip the puzzle one day? Or maybe you just want to play it in an app of your choice.
I just wrote a basic script which uses regular expressions to extract the puzzle from the New York Times website. I’m particularly fond of how it parses the JavaScript game data as JSON; who knows how long that will last. Combined with a scheduled GitHub Actions workflow and a little bit of Git scraping, it should pull the hard puzzle each day and store it in a .sdk file.
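For the curious, the scheduling and Git-scraping half of this is just a small workflow along these lines; the cron time, script name, and commit details below are placeholders rather than what the repository actually uses:

```yaml
name: fetch-daily-sudoku
on:
  schedule:
    - cron: '30 5 * * *'   # placeholder time, once a day
  workflow_dispatch:

jobs:
  fetch:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Fetch today's hard puzzle
        run: python fetch_puzzle.py   # hypothetical script name
      - name: Commit any new puzzle files
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add -A
          git diff --staged --quiet || git commit -m "Add puzzle for $(date -I)"
          git push
```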
The script is available on GitHub and hopefully it will build up a nice Sudoku archive over time. Now I just need to make it easier to load custom games into GNOME Sudoku.