
The unexpected side-benefits of implementing standards

19 December 2014

Written by Ross Singer


While the urge to create your own API, fit exactly for your purpose, may seem appealing, don’t disregard the idea of implementing around standards from the beginning: you might be surprised at the benefits they bring in places you didn’t expect.


Standards are generally (and often justifiably) much maligned in the software development world (I’ll wait here while you cue up xkcd 927 to paste in the comments); the usual argument is that rolling your own request/response interface is more flexible and less constraining. That is probably true. But building your own, proprietary API comes with its own set of issues that are often ignored or overlooked at first:

  1. Designing a flexible/extensible API
  2. Managing changes and versioning
  3. Maintaining client libraries and developer documentation

It adds up, and at some point you have to ask how much time and effort you want to invest in it.

In our Digitised Content application, we decided early on to base the request API on the library standard OpenURL (NISO Z39.88). The rationale was that we wanted customers to be able to generate requests from within their own apps, and there was a high probability that those apps already had the capability to create OpenURLs. Even where that capability wasn’t there, there were decent odds that a developer in the library community would be at least somewhat familiar with the standard. Since its main use case is making known-item requests for bibliographic resources, and our product was a system for requesting digitisation of parts of books or articles, it seemed like we could at least use it as a starting point.

That said, it wasn’t an easy sell, because OpenURL makes for a really ugly API.

  • It’s kind of RPC-ish
  • Query strings are long, with a lot of boilerplate
  • There is a non-obvious (and largely unused) entity model that is defined in arcane query parameter prefixes
  • There are subtle, under-documented semantic differences between prefixes ending in “.” and “_”

And really this is just the tip of the iceberg of reasons why a developer would immediately drop this awkward, clumsy-seeming standard and go their own way.
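
To give a flavour of the problem, here is what a fairly typical library-style OpenURL query string looks like, with line breaks added for readability. The key names are genuine Z39.88 KEV keys, but the values are invented for illustration:

    url_ver=Z39.88-2004
    &url_ctx_fmt=info:ofi/fmt:kev:mtx:ctx
    &rft_val_fmt=info:ofi/fmt:kev:mtx:book
    &rft.btitle=An+Example+Monograph
    &rft.spage=12
    &rft.epage=45
    &rfr_id=info:sid/example.vle.ac.uk:reading-lists

The first two parameters are pure boilerplate, and the mix of prefixes is where the “.”/“_” confusion comes in: keys like rft_val_fmt describe the referent entity itself (here, which metadata format follows), while keys like rft.btitle carry the actual metadata values.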

However, as we started using it, we realised how well it actually fit our request data. What we needed from any API was metadata about:

  • The thing you want digitised
  • The course you want it digitised for
  • The person requesting the copy
  • The place the request came from

We could certainly cobble all of these things together in any number of ways, but OpenURL’s aforementioned non-obvious entity model consists of:

  • The referent (i.e. the thing you are enquiring about)
  • The referring entity (the context in which the referent appears)
  • The requester
  • The referrer

Holy smokes, we found ourselves a request API! To boot, there’s also the service entity where we could shoehorn any arguments we wanted to pass to the service.

While the library world completely ignores everything but the referent and the referrer, the tooling is still in place for the rest of them, so that is all just built in.
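
As a rough sketch of how neatly this maps, here is how a digitisation request could be expressed as an OpenURL 1.0 context object. The entity prefixes and administrative keys come from the standard; the metadata fields, values, endpoint and svc arguments are purely illustrative:

    from urllib.parse import urlencode

    # A hypothetical digitisation request expressed as an OpenURL 1.0 (KEV) context object.
    # Entity prefixes: rft = referent, rfe = referring entity, req = requester,
    # rfr = referrer, svc = service-specific arguments.
    params = {
        "url_ver": "Z39.88-2004",
        "url_ctx_fmt": "info:ofi/fmt:kev:mtx:ctx",
        # The thing you want digitised (the referent)
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",
        "rft.btitle": "An Example Monograph",
        "rft.spage": "12",
        "rft.epage": "45",
        # The course you want it digitised for (the referring entity) -- illustrative field
        "rfe.title": "HIST-101: Introductory Historiography",
        # The person requesting the copy (the requester)
        "req_id": "mailto:lecturer@example.ac.uk",
        # The place the request came from (the referrer)
        "rfr_id": "info:sid/example.vle.ac.uk:reading-lists",
        # Anything extra we want to shoehorn in for the service itself -- illustrative field
        "svc.format": "pdf",
    }

    # Illustrative endpoint; the real request URL would be whatever the service exposes.
    print("https://digitisation.example.org/requests?" + urlencode(params))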

Another benefit of starting off with OpenURL support is that when we eventually needed to grab data from the library’s systems, we didn’t have to write anything new to make OpenURL requests.

But the serendipitous side-effects of sticking with standards didn’t end there: when we were tasked with integrating our applications with virtual learning environments (such as Blackboard and Moodle), one of our customers recommended that I look at IMS LTI, which (admittedly) I took one look at, ignored, and promptly rolled my own API for VLE integration instead.

After our first release, I immediately regretted that decision, since it quickly became apparent how much tooling was going to be required in a lot of different places and how many variables might be present at the local institutions.

A year later, when we needed to investigate integrating another app into VLEs, we revisited LTI and, as a proof of concept, integrated it alongside our original API.

Immediately, we were able to accept requests from any VLE, with very little new development required on our end. Further, we now had a much more robust interface for our products to work with one another internally (which, in turn, paved the way for our newer Talis Lighthouse project to fit in seamlessly). The app we initially used for our proof of concept doesn’t actually translate well to stock LTI workflows, but that didn’t really matter: by basing the interface on LTI, we were (and are) able to leverage the existing functionality within the VLE frameworks. That means less code we have to write and fewer variables we have to contend with on the customer’s end. And while it may not be the ideal user experience for a native LTI application, it works, so customers don’t have to wait for a custom module to use our app.
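
For the curious, the reason so little new development was needed is that an LTI “basic launch” (in the LTI 1.x style current at the time) is just an OAuth 1.0a-signed form POST from the VLE, carrying the course, user and resource context with it. The parameter names in the sketch below come from the LTI 1.1 specification; the values, and the assumption that this is exactly the shape our apps consume, are illustrative:

    from urllib.parse import urlencode

    # A hypothetical LTI 1.1 "basic launch" POST body, as a tool might receive it from
    # Blackboard or Moodle. Parameter names are from the LTI 1.1 spec; values are invented.
    launch_params = {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "module-3-digitised-chapter",  # where in the course the link lives
        "context_id": "HIST-101-2014",                     # the course
        "context_title": "Introductory Historiography",
        "user_id": "vle-user-42",                          # the person clicking the link
        "roles": "Instructor",
        "lis_person_name_full": "A. Lecturer",
        "lis_person_contact_email_primary": "lecturer@example.ac.uk",
        "oauth_consumer_key": "example-key",               # shared key identifying the VLE
        # ...plus the oauth_* signature parameters the VLE adds when it signs the POST
    }

    # The VLE supplies the course, user and resource context itself, so the tool only
    # has to verify the OAuth signature and map these fields onto its own request model.
    print(urlencode(launch_params))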

As we overhaul our reading list application to use the LTI interface, this will, in turn, allow us to embed any LTI-enabled resource into resource lists. Again, this wasn’t something we set out to build or support, but we get it for free, just by using standards for our internal needs.

Certainly, standards aren’t a magic bullet, and they incur their own costs: they constrain your initial design and, since standards tend to be overengineered to handle many use cases, they can present a steep learning curve to master and support. However, when you are able to consume and be consumed by a variety of external resources with little or no extra development needed, the effort pays itself off quickly.

Comments on HN