Albin Larsson: Blog

Culture, Climate, and Code

Biocaching.com and PHP

3rd November 2016

The main Biocaching client, biocaching.com, is built with PHP, and I was the one responsible for the decision to do it that way.

Although the Biocaching Platform is API-first, biocaching.com is not all client-side JavaScript. Development speed, easy accessibility, rendering speed, and maintainability were all reasons to ditch that idea.

I considered Go, Ruby, and Node. Node had the advantage of being non-blocking by default. Rails had the advantage of development speed and of being Ruby (Ruby is used all over Biocaching). Go is just not there yet; wasn’t there a new package loading solution the other day?

PHP had the advantage of development speed and the fact that it’s literally made for HTML rendering. No framework needed, the best documentation around, and it’s easy for anyone to maintain.
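To illustrate what I mean by “made for HTML rendering”, here is a minimal sketch, not actual biocaching.com code; the endpoint URL and field names are invented:

    <?php
    // Minimal sketch, not actual biocaching.com code: render HTML straight
    // from an API response, no framework or template engine needed.
    // The endpoint URL and field names are invented for the example.
    $profile = json_decode(
        file_get_contents('https://api.example.org/users/42'),
        true
    );
    ?>
    <h1><?= htmlspecialchars($profile['name']) ?></h1>
    <p><?= htmlspecialchars($profile['bio']) ?></p>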

Just yesterday I was trying to decrease the TTFB (time to first byte) on profile pages over at biocaching.com. Usually TTFB is not an issue, but the biocaching.com server application is an API-first application making actual HTTP requests to the Biocaching API...

If you are now thinking how stupid it is to use a blocking language such as PHP for such an application, you should do a few Google searches and end up on Stack Overflow a few times.

A.k.a. the general development workflow.

So the reason for the issue on biocaching.com was my logic for rendering all the follow buttons (there can be a lot of them).


I started off with my calculator, as usual when working on performance issues, did some analysis, and decided on a solution that could reduce the TTFB by up to 37%!

Let’s just say that my calculation sucked; at one point I was able to measure an improvement of 87%. It’s milliseconds, but still magic.
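For the curious, here is a minimal sketch of the kind of batching that helps in a blocking setup like this: issuing the follow-state lookups concurrently with curl_multi instead of one request at a time. It is an illustration, not the actual fix; the endpoint URL and response shape are invented.

    <?php
    // Illustration only, not the actual biocaching.com fix. Fetch the follow
    // state for many profiles concurrently instead of one blocking request
    // per button. The endpoint and response shape are invented.
    function fetchFollowStates(array $userIds, string $apiBase): array
    {
        $multi   = curl_multi_init();
        $handles = [];

        foreach ($userIds as $id) {
            $ch = curl_init("$apiBase/users/$id/following");
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_multi_add_handle($multi, $ch);
            $handles[$id] = $ch;
        }

        // Run all transfers in parallel and wait for them to finish.
        do {
            curl_multi_exec($multi, $running);
            curl_multi_select($multi);
        } while ($running > 0);

        $states = [];
        foreach ($handles as $id => $ch) {
            $states[$id] = json_decode(curl_multi_getcontent($ch), true);
            curl_multi_remove_handle($multi, $ch);
            curl_close($ch);
        }
        curl_multi_close($multi);

        return $states;
    }

Ten sequential 50 ms API calls cost roughly 500 ms of TTFB; run concurrently they cost roughly the time of the slowest one.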

If you would like to see more posts about the Biocaching Platform, the sometimes crazy technical solutions, and the usage of some of the coolest open datasets around, let me know!

Platsr API Sandbox

24th September 2016

Update: Platsr.se no longer exists.

Platsr.se, if you aren’t familiar with it, is a web community for gathering local stories and media around different locations. As a result it has some pretty incredible data, which users have put a lot of thought into, all available under open licenses.

Platsr also has an API, and has had one for ages, but almost no one has built upon it. If you take a look at the official documentation you won’t find that fact weird (even if you do read Swedish).

Just because you provide an open API does not make it accessible. Actually, you can throw huge piles of money into developing an amazing API and developers may still never end up using it.

Creating Something Useful

I decided to rewrite the Platsr API documentation and create an API Sandbox for it.

When I set off to build the sandbox, my feature specification was as basic as the following:

Rewriting all the documentation from scratch took a train trip between Oslo and Katrineholm (roughly five hours). Nothing the original developers could not have invested in from the beginning.

Building the sandbox on the other hand took a lot more time than it would have taken me to write the actual API.

It required a proxy server to avoid CORS and mixed content issues, because who uses your API client-side and prefers HTTPS? ;-)
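Roughly, the proxy forwards the browser’s request to the HTTP-only Platsr API on the server side and returns the response from the sandbox’s own HTTPS origin with permissive CORS headers. A stripped-down sketch, not the sandbox’s actual code; the upstream base URL and content type are assumptions:

    <?php
    // Stripped-down CORS/mixed-content proxy sketch, not the sandbox's
    // actual code. The upstream base URL and content type are assumptions.
    $path = $_GET['path'] ?? '';

    // Only let simple, safe-looking API paths through the proxy.
    if (!preg_match('#^[A-Za-z0-9/_.%-]*$#', $path)) {
        http_response_code(400);
        exit('Bad path');
    }

    // Fetch from the HTTP-only upstream API on the server side...
    $body = file_get_contents('http://www.platsr.se/api/' . $path);

    // ...and return it from our own HTTPS origin with CORS enabled.
    header('Access-Control-Allow-Origin: *');
    header('Content-Type: application/xml'); // assumed content type
    echo $body;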

In the end I was really happy about the result; you can try the sandbox out and access the documentation right now. Please let me know what you think so I can make even better sandboxes and documentation next time!

Tip: When developing APIs, write the documentation first, then use it as a blueprint for the actual development.

Want to chat?

Would you like to talk to me about open data/data accessibility, hacks/projects, or anything else?

I will be attending both Hack4Heritage in Stockholm and Hack4NO in Hønefoss, so make sure to catch me there!


Some previous posts that might interest you:

Hack4FI and Wikidata

8th February 2016

Hack4FI logo

I got home late yesterday after an entire weekend at Hack4FI - Hack your heritage in Helsinki. This was a very different hackathon for me, because I was there as a part of Wikimedia Finland’s Wikidata project and had no intention at all of working on any specific project. The only aim I had was to promote Wikidata/Wikimedia and help with actual technical things as much as I could.

I found myself in a totally new position where I was the person people pointed at if anyone had questions, and the person people from different institutions asked about Wikidata/Wikimedia. I tried my best to answer everyone and help people create integrations. I know that we got a few people working on bots and integrations, and a few organizations/institutions to start using Wikidata. I’m hoping I can continue to be a support for both the organizations and individual developers; if I don’t know the answer to a question, that just means I will learn something new, and that’s what I love!

I’m happy to see so many museums and other institutions opening up. Many of them seem to have a process of copying and pasting values from a spreadsheet into Wikidata and Wikimedia Commons, a.k.a. there is a huge need for GLAM developers!

Ajapaik2Commons

Actually, I did do a bit of coding for a project. Ajapaik2Commons could be completed in just two hours and was a nice way of ending the weekend. Ajapaik2Commons is a tool for taking a rephotograph from Ajapaik.ee and publishing it on Wikimedia Commons with all of its metadata. Ajapaik2Commons is based on the Mapillary2Commons tool made by André Costa, and it uses the URL2Commons tool by Magnus Manske behind the scenes. It can be used by other tools by passing an Ajapaik id through a URL parameter.

Currently it isn’t online; I’m going to publish it on the Tool Labs server as soon as I get the time to look into how it works (I got access almost directly after my request, amazing admins!). Maybe I’ll have time on Wednesday!

Special thanks to Wikimedia Finland who made my participation possible! Keep on learning together everyone!

Digging Deeper with Heritage Data and a Geocoder

4th February 2016

This article describes a reverse geocoding experiment performed on Swedish heritage data provided by the SOCH API. I’m using reverse geocoding and a bit of magic to improve location-based search by returning items that aren’t georeferenced. This practice can be applied to almost any dataset.

A friend of mine pointed out that this could be achieved by selecting location names using a usual bounding box. The reason I decided to go with reverse geocoding (besides the fact that it’s more fun) is that it’s more flexible; you can use it along a path or just with a single point without building some rubbish estimated bounding box.

SOCH contains about 1.6 million georeferenced objects, all accessible from a map, but if you are doing research on a location you will need more than just the georeferenced objects. You will probably end up making a few searches (both free-text and location-text) on multiple place names in addition to your bounding box search.

All those searches can be automated from the bounding box search.

For my setup I’m using the open source geocoder Pelias, with place name data from OpenStreetMap (pipeline) and Geonames (pipeline); adding your own data is not a major task either (pelias-model).

You can use any geocoder with useful data (such as Google’s or Mapzen’s public APIs), but the results will differ.

I will make one urban and one non-urban search. The non-urban search will happen around Flodafors in Sörmland, Sweden (59.067, 16.359), near the well-known church Floda kyrka; the urban search will happen around the street Vallgatan in Nyköping, Sweden (58.74, 17.01), next to the former castle Nyköpingshus.

I have set the geocoder to always give me the ten closest place names; those will then be used in the actual text-based searches.
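A rough sketch of that flow, assuming a local Pelias instance (the host and port are assumptions) and with searchSoch() as a hypothetical helper standing in for the actual SOCH queries:

    <?php
    // Rough sketch of the search expansion. Pelias's /v1/reverse endpoint is
    // real, but the host and port are assumptions about a local install, and
    // searchSoch() is a hypothetical helper standing in for the actual
    // SOCH requests.
    function nearbyPlaceNames(float $lat, float $lon, int $size = 10): array
    {
        $url = 'http://localhost:3100/v1/reverse?' . http_build_query([
            'point.lat' => $lat,
            'point.lon' => $lon,
            'size'      => $size,
        ]);
        $geojson = json_decode(file_get_contents($url), true);

        // Each returned GeoJSON feature carries its place name in "properties".
        $names = [];
        foreach ($geojson['features'] ?? [] as $feature) {
            $names[] = $feature['properties']['name'];
        }
        return $names;
    }

    // Expand a single coordinate (Flodafors) into text-based searches.
    $results = [];
    foreach (nearbyPlaceNames(59.067, 16.359) as $name) {
        $results[] = searchSoch($name, 'freeText');     // free-text search
        $results[] = searchSoch($name, 'locationText'); // text-location search
    }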

The non-urban text-based searches resulted in a total of 1724 unique results; the free-text searches resulted in 1330 results, the text-location ones in 808 results. 214 results were retrieved both from the free-text searches and the text-location ones.

Only about 40% of the unique results were relevant for the requested location. I expected the useful results to be even fewer, so even if 40% seems like a bad result it’s actually a good one. It’s quite easy to sort out a lot of irrelevant results by searching the objects’ content for other regions’ names: if an object contains the name of any region other than the one where the location is located, it can be removed from the final results.
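As an illustration of that filter (the region list is truncated and the object shape, an array with a "text" field, is an assumption):

    <?php
    // Illustration only: drop objects that mention another Swedish region
    // than the one we searched in. The region list is truncated and the
    // object shape (an array with a "text" field) is an assumption.
    $regions = ['Sörmland', 'Uppland', 'Skåne', 'Dalarna']; // ...and so on

    function filterByRegion(array $objects, string $wantedRegion, array $regions): array
    {
        $others = array_diff($regions, [$wantedRegion]);

        return array_filter($objects, function ($object) use ($others) {
            foreach ($others as $region) {
                // The object mentions a different region: treat it as irrelevant.
                if (mb_stripos($object['text'], $region) !== false) {
                    return false;
                }
            }
            return true;
        });
    }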

The bounding box for an area covering all the places with the used names returned 32 results.

[Charts: statistics for the non-urban searches]

The urban area did cause some trouble, as about half of the place names used ended up being useless store names. One of the used place names was the name of the local museum, Sörmlands Museum. Even more sadly, this did not cause any bad results, because this is one museum which has its collection content behind paywalls and not available through the SOCH API.

There were 1640 unique results this time, almost 100 results fewer than for the non-urban example; this is because store names and boring museums do not provide any results. The text-location searches returned 256 results, of which only about 20% were useful in this case, mostly because of the fuzzy search behind SOCH (one of the stores contained the name of a church in some foreign area). The free-text searches returned 1384 objects, of which about 80% were useful! I came to the conclusion that this was because the name of the former castle Nyköpingshus provided many results and did not share its name with any other location (which was an issue in the non-urban location).

The relevant bounding box for the urban area returned 13 results.

[Charts: statistics for the urban searches]

It should be noted that the difference in results between the non-urban and the urban area is affected by the fact that the area covered is much larger for the non-urban area, as a result of the lower density of place names.

Although using a reverse geocoder often gave more irrelevant results than relevant ones, this is definitely an improved way of searching; manual searches would give the same irrelevance, and many of the irrelevant objects can be sorted out with simple techniques. Using this and a few other tricks has really helped me when I have been looking for content deep in the oceans of data.

I should also note that each location query ended up making eleven HTTP requests to the Kringla.nu MediaRSS interface (a cheaty way of using the SOCH API), but if such a search feature were added to the actual API it would not be super heavy (at least not for you, the client).

I’m heading to Helsinki tomorrow for Hack4FI - Hack your heritage, as a part of Wikimedia Finland’s Wikidata project, and you should definitely catch me if you want to chat about linked data or something else!
