7th November 2022
I’m learning how to create responsive GNOME applications with Python, GTK4, and Flatpak. One of the early issues I ran into after generating a new Python project using GNOME Builder’s built-in template was how to make Python dependencies from the Python Package Index available to my app.
I couldn’t find a good resource describing the solution, so here is my take for my development log.
If you generated your project using GNOME Builder, you likely have a JSON file in the root of your project directory bearing the name of your application identifier. This file, it turns out, is called a Flatpak manifest.
It contains a section called “modules”, which in my case is an array containing a single object (the module for the application itself). This array does not only take objects, however; it also takes strings: paths to other manifests.
For each of one’s pip dependencies we need a module definition. Once generated, module sections might remind you of lock files. It would be cumbersome to create these “manually” or individually, so there is a “Flatpak PIP Generator” script that, among other things, can generate these module definitions given a requirements file.
Start by placing the
flatpak-pip-generator script in the root of your project (you might want to add it to your
.gitignore). Then create a
requirements.txt file containing all your direct dependencies and run:
./flatpak-pip-generator -r requirements.txt -o python-deps
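For illustration, a requirements.txt for a small app might look like this (the package names and versions below are placeholders; list your app’s actual direct dependencies):

```text
requests==2.28.1
Pillow==9.3.0
```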
The above should have generated a Flatpak manifest file named
python-deps.json in the root of your project, containing one module definition for each dependency.
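The generated file pins each package to an exact version and source tarball, which is what makes it feel like a lock file. As a rough sketch of a single module (the build command, URL, and checksum below are illustrative, not copied from real output):

```json
{
  "name": "python3-requests",
  "buildsystem": "simple",
  "build-commands": [
    "pip3 install --no-index --find-links=\"file://${PWD}\" --prefix=${FLATPAK_DEST} requests"
  ],
  "sources": [
    {
      "type": "file",
      "url": "https://files.pythonhosted.org/packages/.../requests-2.28.1.tar.gz",
      "sha256": "<sha256-of-the-tarball>"
    }
  ]
}
```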
In the “modules” section of the initial Flatpak manifest one can then add a new entry:
python-deps.json. Now your app should build and run with all the dependencies available. If it doesn’t run, make sure you clean and rebuild the project, as the module configs might be cached.
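The resulting “modules” array mixes a string entry and an object entry. A sketch, assuming an application module named “example” (your module’s name and contents will differ; the application module is abbreviated here, and only the added "python-deps.json" string matters):

```json
"modules": [
  "python-deps.json",
  {
    "name": "example",
    "buildsystem": "meson",
    "sources": []
  }
]
```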
Here are some of the resources that I used while tinkering with the above: the Flatpak documentation, a Stack Overflow question, and a GNOME Discourse question.
16th October 2022
User Agent spoofing isn’t news and is necessary for many Internet users. Today, however, I noticed something I hadn’t before: User Agent spoofing causes analytics and security services to report the wrong operating system and browser altogether.
I logged into Twitter from GNOME Web on a postmarketOS device, and the login notice I received a while later told me I had logged in from an Android device using Google Chrome.
Mozilla/5.0 (Linux; Android 10; Pixel) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.96 Mobile Safari/537.36
It turns out that GNOME Web doesn’t expose my Linux distribution or what browser I’m using by default. That’s all good, as it reduces the ability of webpages to successfully sniff user agents in the first place. Tor Browser and others use this approach for improved privacy.
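A naive sniffing heuristic shows why a service would report Android and Chrome here (a simplified sketch of substring matching, not any real analytics service’s logic):

```python
# A naive user-agent sniff: first matching token wins, so the spoofed
# "Android" and "Chrome/" tokens shadow the real platform and browser.
UA = ("Mozilla/5.0 (Linux; Android 10; Pixel) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/83.0.4103.96 Mobile Safari/537.36")

def sniff(ua: str) -> tuple[str, str]:
    if "Android" in ua:
        os_name = "Android"
    elif "Linux" in ua:
        os_name = "Linux"
    else:
        os_name = "Unknown"
    if "Chrome/" in ua:
        browser = "Chrome"
    elif "Safari/" in ua:
        browser = "Safari"
    else:
        browser = "Unknown"
    return os_name, browser

print(sniff(UA))  # reports ('Android', 'Chrome') for GNOME Web on postmarketOS
```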
I can’t resist wondering to what extent this cements the notion of Chrome and Edge having all of the browser market. Probably not much, as my case is a very uncommon one. It’s still interesting to think about.
13th April 2022
Since the first Swedish edition of Wiki Loves Monuments in 2011, participants have uploaded almost 30 000 images of heritage sites, protected buildings, ships, and working life museums.
Wiki Loves Monuments (WLM) goes on for 30 days per year. While many experienced users take images all year round for the event, many new contributors are introduced to WLM and the broader Wikimedia community for the first time through the 30-day event. What if there was a just as engaging effort for documenting heritage environments that would go on not for 30 days a year, but for 365?
Wiki Loves Monuments builds to an extent upon a Wikimedia Commons feature called Campaigns. Campaigns make it possible to construct upload forms with a set of predefined values or have default values passed through Campaign URLs.
Much of the Wiki Loves Monuments tooling (maps, lists, etc.) uses WLM-specific Campaigns to define values for things like the identifiers of monuments and sites, and it’s super easy to create these links.
However, it’s not that easy to reuse these Campaigns for non-WLM usage: the help texts are WLM-specific, they set WLM-specific categories, and so on. Nor is it easy to create a new Campaign, as you will need special user rights and experience with JSON and Wikitext.
So rather than setting up tool-specific Campaigns, what if we had generic ones that any tool could integrate without needing to worry about setup and help texts?
Such Campaigns now exist thanks to Wikimedia Sverige’s community support! It doesn’t matter if you want to crowdsource images from a spreadsheet or if you are a programmer wanting to integrate uploading into your tool or app. You can use these Campaigns.
Two tools already utilizing these Campaigns are Kyrksök.se (a database of churches in Sweden) and FornPunkt.se (a citizen-science platform for historic sites).
You can create upload links for these Campaigns similarly to how one does it for WLM.
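Such a link follows the shape of Wikimedia Commons’ Special:UploadWizard URL, with the Campaign name and any pre-filled values passed as query parameters. The campaign name and parameters below are placeholders, and which parameters are accepted depends on the Campaign’s configuration:

```text
https://commons.wikimedia.org/wiki/Special:UploadWizard?campaign=<campaign-name>&description=<prefilled-description>&categories=<category-name>
```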
The 30 000 images contributed through Wiki Loves Monuments add to the tens of thousands of images uploaded by individual contributors and cultural heritage institutions. Today Wikimedia Commons has the largest public collection of images depicting Swedish heritage sites. Wikimedia Commons is, therefore, an important and open piece of infrastructure for cultural heritage in Sweden. It’s utilized by various organizations, including the Swedish National Heritage Board and its website “Kringla”, which indexes Wikimedia Commons once every week.
By creating these generic and easy-to-use Campaigns, the hope is to lower the barrier for integrations to contribute to this public collection of information. Your aim does not need to be to create the next Wiki Loves Monuments.
28th March 2022
Isn’t it great when a citation tool like Zotero or Wikipedia’s Citoid takes a link and turns it into a citation? In this post, I show some of the HTML markup needed for your web pages to support just that.
How Zotero and Citoid work
Citoid is backed by Zotero’s translation server, so both rely on the same set of translators: small JavaScript files that know how to extract citation data from particular sites.
Now, you could write a Zotero translator file for your site and submit it; there are already over 600 translators that could serve as examples. That might be a good way forward if you don’t have control over your website’s HTML markup.
However, one of those translator files, “Embedded Metadata.js”, happens to be a generic one that will try to extract data from your site if the site does not have its own translator.
That translator is very capable and supports both common, generic metadata ontologies, such as Open Graph, and ones mostly found in industry-specific settings, like BibO and Eprint Terms.
The tags and attributes your page needs
These tags are the ones I have ended up using, as they are rather generic and serve many use cases beyond citation.
<link rel="canonical" href="https://byabbe.se/example-page">
Always include a canonical tag! That ensures the citation uses your canonical link and not whatever URL the user copied or visited.
<meta property="og:title" content="Not your page title">
The title of the work or article is not necessarily the same as the page title, as the latter can contain things like the site name.
<meta name="author" content="Albin Larsson">
The name of the author.
<meta property="og:site_name" content="Site name">
The name of the site.
<meta property="article:published_time" content="2022-02-03T00:00:00+00:00">
Time of publication. Note that this property comes from the article namespace of the Open Graph vocabulary. Let me know if you find a more generic property that is as widely supported!
<meta property="og:locale" content="en_GB">
The language of the page.
<meta name="description" content="An actual description.">
A description of the work. This isn’t commonly used for display purposes, but some tools and setups still use it for indexing, etc.
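Putting the tags above together, the head of a citable page might look like this (all values are illustrative):

```html
<head>
  <title>Not your page title — Site name</title>
  <link rel="canonical" href="https://byabbe.se/example-page">
  <meta property="og:title" content="Not your page title">
  <meta name="author" content="Albin Larsson">
  <meta property="og:site_name" content="Site name">
  <meta property="article:published_time" content="2022-02-03T00:00:00+00:00">
  <meta property="og:locale" content="en_GB">
  <meta name="description" content="An actual description.">
</head>
```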
Do you have other markup you use to expose data to citation tools? Let me know!