Inside Acropolis

A guide to the Research & Education Space for contributors and developers

May 2016 Edition

Edited by Mo McRoberts, BBC Archive Development.


The Research & Education Space (RES) is a partnership project between Jisc, Learning on Screen, and the BBC that aims to make it easier for teachers, students and academics to discover, access and use material held in the public collections of broadcasters, museums, libraries, galleries and publishers.

Powering the Research & Education Space is Acropolis, an open source software stack which collects, indexes and organises rich structured data about those archive collections published as Linked Open Data (LOD) on the Web. The collected data is organised around the people, places, events, concepts and things related to the items in the archive collections. If the archive assets themselves are available in digital form, that data includes the information on how to access them, all in a consistent machine-readable form. The RES platform has a powerful API to ensure applications can make use of the index data, along with the source data, to make those collections accessible and meaningful.

This book describes how a collection-holder can publish their data in a form which can be collected and indexed by the platform and used by applications, and how an application developer can make use of the index and interpret the source data in order to present it to end-users in a useful fashion.

This book is also available in PDF format.

How to use this book

Inside Acropolis is aimed at two key audiences: collection holders (people and organisations who have data that they do or can publish as linked open data describing concepts, people, places, things, events or media), and product developers (people and organisations who want to build software that uses the Research & Education Space API).

Everyone should read the Introduction to the Research & Education Space platform. If you’re new to RDF or linked open data, you should also read the Linked Open Data: What is it, and how does it work? section.

Collection holders should read the Requirements for publishers section, which includes high-level technical and editorial guidance on how to produce data which is suitable for indexing by the Research & Education Space platform and for use by applications developed with its API.

Product developers should read the Requirements for consuming applications section, which contains high-level technical and editorial guidance, as well as The Research & Education Space API: the index and how it's structured section, which explains how to interact with the platform’s API specifically, as distinct from a generic linked open data service.

Finally, it is essential that everyone working with the Research & Education Space reads the Describing and consuming data section, which describes the specific classes and predicates that publishers are recommended to use in their data, and product developers should expect to see appearing in the data they consume.

An introduction to the Research & Education Space platform

The Research & Education Space is powered by an open source software stack named Acropolis, which is made up of three main components: a specialised web crawler, Anansi; an aggregator, Spindle; and a public API layer, Quilt. You can read more about the architecture of the stack in the section Under the hood: the architecture of Acropolis.

Anansi’s role is to crawl the web, retrieving permissively-licensed Linked Open Data, and passing it to the aggregator for processing.

Spindle, which is implemented as a plug-in module for our data workflow engine, Twine, examines the data, looking for instances where the same digital, physical or conceptual entity is described in more than one place (particularly where the data explicitly states the equivalence), and aggregates and stores that information in an index.

This subject-oriented index is the very heart of the Research & Education Space: by re-arranging published data so that it's organised around the entities described by it, instead of by publisher or data-set, applications are able to rapidly locate all of the information known about a particular entity because it’s collected together in one place.

Quilt is responsible for making the index available to applications, also by publishing it as Linked Open Data. Because the Research & Education Space maintains an index, rather than a complete copy of all data that it finds, applications must consume data both from the index and from the original data sources—and so the outputs from Quilt also conform to the publishing recommendations in this book.

An overview of the Research & Education Space

We will not be directly developing end-user applications as part of the Research & Education Space itself, although sample code and demonstrations will be published to assist software developers in doing so. There will be a “powered by RES” logo scheme rolled out during the course of the project.

The Research & Education Space only indexes and publishes data which has been released under terms which permit re-use in both commercial and non-commercial settings, so that all kinds of applications can be developed using the platform.

For the Research & Education Space to be most useful, holders of publicly-funded archive collections across the UK should publish Linked Open Data describing their collections (including related digital assets, where they exist). Many collections are already doing so or plan to, and the Research & Education Space project partners will provide tools and advice to assist collection-holders throughout the lifetime of the project.

Linked Open Data: What is it, and how does it work?

Linked Open Data is a mechanism for publishing structured data on the Web about virtually anything, in a form which can be consistently retrieved and processed by software. The result is a world wide web of data which works in parallel to the web of documents our browsers usually access, transparently using the same protocols and infrastructure.

Where the ordinary web of documents is a means of publishing a page about something intended for a human being to understand, this web of data is a means of publishing data about those things.

Web addresses, URLs and URIs

Uniform Resource Locators (URLs), often known as web addresses, are a way of unambiguously identifying something which is published electronically. Although there are a variety of kinds of URL, most of those you see day-to-day begin with http or https: this is known as the scheme, and it defines how the rest of the URL is structured (although most kinds of URL follow a common structure).

The scheme also indicates the communications protocol which should be used to access the resource identified by the URL: if it's http, then the resource is accessible using HTTP—the protocol used by web servers and browsers; if it's https, then it’s accessible using secure HTTP (i.e., HTTP with added encryption).

Following the scheme in a URL is the authority: the domain name of the web site. It’s called the authority because it identifies the entity responsible for defining the meaning and structure of the remainder of the URL. If a URL begins with http://www.bbc.co.uk/, you know that it's defined and managed by the BBC; if it begins with http://www.bfi.org.uk/, you know that it's managed by the BFI, and so on.

After the authority comes an optional path (the location of the document within the context of the particular domain name or authority), followed by an optional query (beginning with a question-mark) and an optional fragment (beginning with a hash-mark).

URLs serve a dual purpose: not only do they provide a name for something, they also provide anything which understands them with the information needed to retrieve it. Provided your application can speak the HTTP protocol, it should in principle be able to retrieve anything using an http URL.

Uniform Resource Identifiers (URIs) are a superset of URLs, and are in effect a kind of universal identifier: their purpose is to name something, without necessarily indicating how to retrieve it. In fact, the thing named by a URI may not be retrievable by software at all, because it refers to an abstract concept or a physical object.

URIs follow the same structure as URLs, in that there is a scheme defining how the remainder is structured, and usually some kind of authority, but there are many different schemes, and many of them do not have any particular mechanism defined for how you might retrieve something named using that scheme.

For example, the tag: URI scheme provides a means for anybody to define a name for something in the form of a URI, using a domain name that they control as an authority, but without indicating any particular semantics about the thing being named.

Meanwhile, URIs which begin with urn: are actually part of one of a number of sub-schemes, many of which exist as a means of writing down some existing identifier about something in the form of a URI. For example, an ISBN can be written as a URI by prefixing it with urn:isbn: (for example, urn:isbn:9781899066100).
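As a sketch, writing an ISBN as a urn:isbn: URI is a matter of simple string manipulation. The helper name below is our own invention, for illustration only:

```python
# Hypothetical helper: write a bare ISBN as a urn:isbn: URI.
def isbn_to_urn(isbn: str) -> str:
    """Strip separators from an ISBN and prefix it with urn:isbn:."""
    return "urn:isbn:" + isbn.replace("-", "").replace(" ", "")

print(isbn_to_urn("978-1899066100"))  # urn:isbn:9781899066100
```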

You might be forgiven for wondering why somebody might want to write an ISBN in the form of a URI, but in fact there are a few reasons. In most systems, ISBNs are effectively opaque alphanumeric strings: although there is usually some validation of the check digit upon data entry, once stored in a database, they are rarely interrogated for any particular meaning. Given this, ISBNs work perfectly well for identifying books for which ISBNs have been issued—but what if you want to store data about other kinds of things, too? Recognising that this was a particular need for retailers, a few years ago ISBNs were made into a subset of Global Trade Item Numbers (GTINs), the system used for barcoding products sold in shops.

By unifying ISBNs and GTINs, retailers were able to use the same field in their database systems for any type of product being sold, whether it was a book with an ISBN, or some other kind of product with a GTIN. All the while, the identifier remained essentially opaque: provided the string of digits and letters scanned by the bar-code reader could be matched to a row in a database, it doesn't matter precisely what those letters and numbers actually are.

Representing identifiers in the form of URIs can be thought of as another level of generalisation: it allows the development of systems where the underlying database doesn’t need to know nor care about the kind of identifier being stored, and so can store information about absolutely anything which can be identified by a URI. In many cases, this doesn’t represent a huge technological shift—those database systems already pay little attention to the structure of the identifier itself.

Hand-in-hand with this generalisation effect is the ability to disambiguate and harmonise without needing to coordinate a variety of different standards bodies across the world. Whereas the integration of ISBNs and GTINs took a particular concerted effort in order to achieve, the integration of ISBNs and URNs was only a matter of defining the URN scheme, because URIs are already designed to be open-ended and extensible.

Linked Open Data URIs are a subset of URIs which, again, begin with http: or https:, but do not necessarily name something which can be retrieved from a web server. Instead, they are URIs where performing resolution results in machine-readable data about the entity being identified.

In summary:

Term                     Used for…
URLs                     Identifying digital resources and specifying where they can be retrieved from
URIs                     Identifying anything, regardless of whether it can be retrieved electronically or not
Linked Open Data URIs    Identifying anything, but in a way which means that descriptive metadata can be retrieved when the URI is resolved

Describing things with triples

Linked Open Data uses the Resource Description Framework (RDF) to convey information about things. RDF is an open-ended system for modelling information about things, which it does by breaking it down into statements (or assertions), each of which consists of a subject, a predicate and an object.

The subject is the thing being described; the predicate is the aspect or attribute of the subject being described; and the object is the description of that particular attribute.

For example, you might want to state that the book with the ISBN 978-1899066100 has the title Acronyms and Synonyms in Medical Imaging. You can break this assertion down into its subject, predicate, and object:

Subject:    ISBN 978-1899066100
Predicate:  Has the title
Object:     Acronyms and Synonyms in Medical Imaging

Together, this statement made up of a subject, predicate and object is called a triple (because there are three components to it), while a collection of statements is called a graph.

In RDF, the subject and the predicate are expressed as URIs: this helps to remove ambiguity and the risk of clashes, so that the data can be published and consumed in the same way regardless of where it comes from or who’s processing it. Objects can be expressed as URIs where you want to assert some kind of reference to something else, but can also be literals (such as text, numeric values, dates, and so on).
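A triple can be sketched as a plain 3-tuple. The subject URI below reuses the urn:isbn: form from earlier, and the predicate is the DCMI Terms title predicate; both are illustrative choices rather than the only possible ones:

```python
# A triple modelled as a 3-tuple: subject and predicate are URIs,
# while the object here is a literal string.
triple = (
    "urn:isbn:9781899066100",                    # subject: the book
    "http://purl.org/dc/terms/title",            # predicate: "has the title"
    "Acronyms and Synonyms in Medical Imaging",  # object: a literal
)

subject, predicate, obj = triple
print(subject)  # urn:isbn:9781899066100
```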

Predicates and vocabularies

RDF doesn’t specify the meaning of most predicates itself: in other words, RDF doesn’t tell you what URI you should use to indicate “has the title”. Instead, because anybody can create a URI, it’s entirely up to you whether you invent your own vocabulary when you publish your data, or adopt somebody else’s. Generally, of course, if you want other people to be able to understand your data, it’s probably a good idea to adopt existing vocabularies where they exist.

In essence, RDF provides the grammar, while community consensus provides the dictionary.

One of the most commonly-used general-purpose vocabularies is DCMI Metadata Terms, managed by the Dublin Core Metadata Initiative (DCMI), which includes a suitable title predicate, http://purl.org/dc/terms/title:

Subject:    ISBN 978-1899066100
Predicate:  http://purl.org/dc/terms/title
Object:     Acronyms and Synonyms in Medical Imaging

With this triple, a consuming application that understands the DCMI Metadata Terms vocabulary can process that data and understand the predicate to indicate that the item has the title Acronyms and Synonyms in Medical Imaging.

Because a full URI such as http://purl.org/dc/terms/title is quite long-winded, it’s common to write predicate URIs in a compressed form, consisting of a namespace prefix and a local name, similar to the xmlns mechanism used in XML documents.

Because people will often use the same prefix to refer to the same namespace URI, it is not unusual to see this short form of URIs used in books and web pages. Some common prefixes and namespace URIs are shown below:

Prefix    Namespace URI
rdf:      http://www.w3.org/1999/02/22-rdf-syntax-ns#
rdfs:     http://www.w3.org/2000/01/rdf-schema#
dct:      http://purl.org/dc/terms/
foaf:     http://xmlns.com/foaf/0.1/
xsd:      http://www.w3.org/2001/XMLSchema#
bibo:     http://purl.org/ontology/bibo/

For example, defining the namespace prefix dct with a namespace URI of http://purl.org/dc/terms/, we can write our predicate as dct:title instead of http://purl.org/dc/terms/title. RDF systems re-compose the complete URI by concatenating the namespace URI and the local name.
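The re-composition really is plain concatenation. A minimal sketch, with a prefix table of our own:

```python
# Re-compose a full URI from a prefixed name by concatenating the
# namespace URI and the local name, as RDF systems do.
PREFIXES = {
    "dct": "http://purl.org/dc/terms/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
}

def expand(name: str) -> str:
    prefix, local = name.split(":", 1)
    return PREFIXES[prefix] + local

print(expand("dct:title"))  # http://purl.org/dc/terms/title
```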

Subject URIs

In RDF, subjects are also URIs. While in RDF itself there are no particular restrictions upon the kind of URIs you can use (and there are a great many different kinds — those beginning http: and https: that you see on the Web are just two of hundreds), Linked Open Data places some restrictions on subject URIs in order to function. These are:

  1. Subject URIs must begin with http: or https:.
  2. They must be unique: although you can have multiple URIs for the same thing, one URI can’t refer to multiple distinct things at once.
  3. If a Linked Open Data consumer makes an HTTP request for the subject URI, the server should send back RDF data describing that subject.
  4. As with URLs, subject URIs need to be persistent: that is, they should change as little as possible, and where they do change, you need to arrange for requests for the old URI to be forwarded to the new one.

In practice, this means that when you decide upon a subject URI, it needs to be within a domain name that you control and can operate a web server for; you need to have a scheme for your subject URIs which distinguishes between things which are represented digitally (and so have ordinary URLs) and things which cannot; you also need to arrange for your web server to actually serve RDF when it’s requested; and finally you need to decide a form for your subject URIs which minimises changes.

This may sound daunting, but it can be quite straightforward—and shares much in common with deciding upon a URL structure for a website that is intended only for ordinary browsers.

For example, suppose you are the Intergalactic Alliance Library & Museum, whose domain name (for the purposes of this example) is ialm.example.org. You might decide that all of your books’ URIs will begin with http://ialm.example.org/books/, and use the full 13-digit ISBN, without dashes, as the key. You could pick something other than the ISBN, such as an identifier meaningful only to your own internal systems, but it makes developers’ lives easier if you incorporate well-known identifiers where it’s not problematic to do so.

Because this web of data co-exists with the web of documents, begin by defining the URL of the document about this book:

http://ialm.example.org/books/9781899066100

Anybody visiting that URL in their browser will be provided with information about the book in your collection. Because the URL incorporates a well-known identifier, the ISBN, if any other pieces of information about the book change or are corrected, that URL remains stable. As a bonus, incorporating the ISBN means that the URL to the document is predictable.

Having defined the URL for book pages, it’s now time to define the rest of the structure. The Intergalactic Alliance Library & Museum web server will be configured to serve web pages to web browsers, and RDF data to RDF consumers: that is, there are multiple representations of the same data. It’s useful, from time to time, to be able to refer to each of these representations with a distinct URL. Let’s say, then, that we’ll use the general form:

http://ialm.example.org/books/ISBN.EXT

In this case, EXT refers to the well-known filename extension for the particular type of representation we’re referring to.

Therefore, the HTML web page for our book will have the representation-specific URL of:

http://ialm.example.org/books/9781899066100.html

If you also published CSV data for your book, it could be given the representation-specific URL of:

http://ialm.example.org/books/9781899066100.csv

RDF can be expressed in a number of different forms, or serialisations. The most commonly-used serialisation is called Turtle, and typically has the filename extension ttl. Therefore our Turtle serialisation would have the representation-specific URL of:

http://ialm.example.org/books/9781899066100.ttl

Now that we have defined the structure of our URLs, we can define the pattern used for the subject URIs themselves. Remember that the URI needs to be dereferenceable—that is, when a consuming application makes a request for it, the server can respond with the appropriate representation.

In order to do this, there are two options: we can use a special kind of redirect, or we can use fragments. The fragment approach works best where you have a document for each individual item, as we do here, and takes advantage of the fact that in the HTTP protocol, any part of a URL following the “#” symbol is never sent to the server.

Thus, let’s say that we’ll distinguish our URLs from our subject URIs by suffixing the subject URIs with #id. The URI for our book therefore becomes:

http://ialm.example.org/books/9781899066100#id

When an application requests the information about this book, by the time the request arrives at our web server, it’s been turned into a request for the very first URL we defined, the generic “document about this book” URL:

http://ialm.example.org/books/9781899066100
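This fragment-stripping behaviour is easy to see with Python’s standard library; the subject URI here is this section’s hypothetical example:

```python
from urllib.parse import urldefrag

# A hypothetical subject URI using the #id convention described above.
subject_uri = "http://ialm.example.org/books/9781899066100#id"

# Clients remove the fragment before making the HTTP request:
document_url, fragment = urldefrag(subject_uri)
print(document_url)  # http://ialm.example.org/books/9781899066100
print(fragment)      # id
```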

When an application understands RDF and tells the server as much as part of the request, the server can send back the Turtle representation instead of an HTML web page—a part of the HTTP protocol known as content negotiation. Content negotiation allows a server to pick the most appropriate representation for something (where it has multiple representations), based upon the client’s preferences.
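Content negotiation can be sketched as a lookup keyed on the media types the client lists in its Accept header. A real server would also weigh q-values and wildcards; this minimal sketch, with a hypothetical table of representations, simply returns the first acceptable type it recognises:

```python
# Hypothetical table mapping media types to stored representations.
REPRESENTATIONS = {
    "text/html": "book.html",
    "text/turtle": "book.ttl",
}

def negotiate(accept_header: str) -> str:
    """Return the first representation matching the client's Accept header."""
    for entry in accept_header.split(","):
        media_type = entry.split(";")[0].strip()  # discard any ;q=… parameters
        if media_type in REPRESENTATIONS:
            return REPRESENTATIONS[media_type]
    return REPRESENTATIONS["text/html"]  # fall back to the web page

# An RDF-aware client asks for Turtle first:
print(negotiate("text/turtle, application/rdf+xml;q=0.8"))  # book.ttl
# An ordinary browser gets the HTML page:
print(negotiate("text/html, */*;q=0.1"))  # book.html
```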

With our subject URI pattern defined, we can revisit our original assertion:

Subject:    http://ialm.example.org/books/9781899066100#id
Predicate:  http://purl.org/dc/terms/title
Object:     Acronyms and Synonyms in Medical Imaging

Defining what something is: classes

One of the few parts of the common vocabulary which is defined by RDF itself is the predicate rdf:type, which specifies the class (or classes) of a subject. Like predicates, classes are defined by vocabularies, and are also expressed as URIs. The classes of a subject are intended to convey what that subject is.

For example, the Bibliographic Ontology, whose namespace URI is http://purl.org/ontology/bibo/ (commonly prefixed as bibo:), defines a class named bibo:Book (whose full URI we can deduce as being http://purl.org/ontology/bibo/Book).

If we write a triple which asserts that our book is a bibo:Book, any consumers which understand the Bibliographic Ontology can interpret our data as referring to a book:

Subject:    http://ialm.example.org/books/9781899066100#id
Predicate:  rdf:type
Object:     bibo:Book

Subject:    http://ialm.example.org/books/9781899066100#id
Predicate:  http://purl.org/dc/terms/title
Object:     Acronyms and Synonyms in Medical Imaging

Describing things defined by other people

There is no technical reason why your subject URIs must only be URIs that you control directly. In Linked Open Data, trust is a matter for the data consumer: one application might have a white-list of trusted sources, another might have a black-list of sources known to be problematic, another might have more complex heuristics, while another might use your social network such that assertions from your friends are considered more likely to be trustworthy than those from other people.

Describing subjects defined by other people has a practical purpose. Predicates work in a particular direction, and although sometimes vocabularies will define pairs of predicates so that you can make a statement either way around, interpreting this begins to get complicated, and so most vocabularies define predicates only in one direction.

As an example, you might wish to state that a book held in a library is about a subject that you’re describing. On a web page, you’d simply write this down and link to it—perhaps as part of a “Useful resources” section. In Linked Open Data, you can make the assertion that one of the subjects of the other library’s book is the one you’re describing. This works exactly the same way as if you were describing something that you’d defined yourself—you simply write the statement, but with somebody else’s URI as the subject.

This can also be used to make life easier for developers and reduce the network overhead of applications. In your “Useful resources” section, you probably wouldn’t only list the URL to the page about the book: instead, you’d list the title and perhaps the author as well as linking to the page about the book. You can do that in Linked Open Data, too. Let’s say that we’re expressing the data about a subject, Roman Gaul, which we’ve assigned a URI of http://ialm.example.org/topics/roman-gaul#id (again, purely as an example):

Subject:    http://ialm.example.org/topics/roman-gaul#id
Predicate:  Has the title
Object:     Roman Gaul

Subject:    (the British Library’s URI for Asterix the Gaul)
Predicate:  Has as a subject
Object:     http://ialm.example.org/topics/roman-gaul#id

In this example we’ve defined a subject, called Roman Gaul, about which we’ve provided very little detail, except to say that it’s a subject of the book Asterix the Gaul, whose identifier is defined by the British Library.

Note that we haven’t described the book Asterix the Gaul in full: RDF operates on an open world principle, which means that sets of assertions are generally interpreted as being incomplete, or rather, only as complete as they need to be. The fact that we haven’t specified an author or publisher of the book doesn’t mean there isn’t one, just that the data isn’t present here; where you do need to state explicitly that something doesn’t exist, RDF vocabularies usually provide a particular way to do that.

Turtle: the terse triple language

Turtle is one of the most common languages for writing RDF in use today—although there are many others. Turtle is intended to be interpreted and generated by machines first and foremost, but also be readable and writeable by human beings (albeit usually software developers).

In its simplest form, we can just write out our statements, one by one, each separated by a full stop. URIs are written between angle-brackets (< and >), while string literals (such as the names of things) are written between double-quotation marks (").

<http://ialm.example.org/books/9781899066100#id> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://purl.org/ontology/bibo/Book> .
<http://ialm.example.org/books/9781899066100#id> <http://purl.org/dc/terms/title> "Acronyms and Synonyms in Medical Imaging" .

This is quite long-winded, but fortunately Turtle allows us to define and use prefixes just as we have in this book. When we write the short form of a URI, it’s not written between angle-brackets:

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix bibo: <http://purl.org/ontology/bibo/> .

<http://ialm.example.org/books/9781899066100#id> rdf:type bibo:Book .

<http://ialm.example.org/books/9781899066100#id> dct:title "Acronyms and Synonyms in Medical Imaging" .

Because Turtle is designed for RDF, and rdf:type is defined by RDF itself, Turtle provides a handy shorthand for this predicate: the keyword a. We can simply say that our book is a bibo:Book:

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix bibo: <http://purl.org/ontology/bibo/> .

<http://ialm.example.org/books/9781899066100#id> a bibo:Book .

<http://ialm.example.org/books/9781899066100#id> dct:title "Acronyms and Synonyms in Medical Imaging" .

Writing the triples out this way quickly gets repetitive: you don’t want to be writing the subject URI every time, especially not if writing Turtle by hand. If you end a statement with a semi-colon instead of a full-stop, it indicates that what follows is another predicate and object about the same subject:

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix bibo: <http://purl.org/ontology/bibo/> .

<http://ialm.example.org/books/9781899066100#id>
	a bibo:Book ;
	dct:title "Acronyms and Synonyms in Medical Imaging" .

Turtle includes a number of capabilities which we haven’t yet discussed here, but are important for fully understanding real-world RDF in general and Turtle documents in particular. These include:

Typed literals

Typed literals: literals which aren’t simply strings of text, but can be of any one of the XML Schema data types.

Literal types are indicated by writing the literal value, followed by two carets, and then the datatype URI: for example, "2013-01-26"^^xsd:date.
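A consuming application will typically split a typed literal into its lexical value and its datatype, then convert the value to a native type. A deliberately naive sketch (real parsers handle escaping and full datatype URIs):

```python
from datetime import date

# Split a Turtle typed literal into lexical value and datatype,
# then convert the value to a native Python date.
literal = '"2013-01-26"^^xsd:date'
value, datatype = literal.rsplit("^^", 1)
parsed = date.fromisoformat(value.strip('"'))

print(parsed)    # 2013-01-26
print(datatype)  # xsd:date
```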

Blank nodes

Blank nodes are entities for which some information is provided, but where the subject URI is not known. There are two different ways of using blank nodes in Turtle: a blank node value is one where in place of a URI or a literal value, an entity is partially described.

Another way of using blank nodes is to assign the node a private, transient identifier (a blank node identifier), and then use that identifier where you’d normally use a URI as a subject or object. The transient identifier has no meaning outside the context of the document: it’s simply a way of referring to the same (essentially anonymous) entity in multiple places within the document.

A blank node value is expressed by writing an opening square bracket, followed by the sets of predicates and values for the blank node, followed by a closing square bracket. For example, we can state that an author of the book is a nondescript entity who we know is a person named Nicola Strickland, but for whom we don’t have an identifier:

<http://ialm.example.org/books/9781899066100#id> dct:creator [
	a foaf:Person ;
	foaf:name "Nicola Strickland" 
] .

Blank node identifiers are written similarly to the compressed form of URIs, except that an underscore is used as the prefix. For example, _:johnsmith. You don’t have to do anything special to create a blank node identifier (simply use it), and the actual name you assign has no meaning outside of the context of the document—if you replace all instances of _:johnsmith with _:zebra, the actual meaning of the document is unchanged—although it may be slightly more confusing to read and write as a human.

Multi-lingual string literals

String literals in the examples given so far are written in no particular language (which may be appropriate in some cases, particularly when expressing people’s names).

The language of a string literal is indicated by writing the literal value, followed by an at-sign and an ISO 639-1 language code; optionally, the language code may be followed by a hyphen and an ISO 3166-1 alpha-2 country code.

For example: "Intergalactic Alliance Library & Museum Homepage"@en, or "grey"@en-gb.

Base URIs

By default, the base URI for the terms in a Turtle document is the URI it’s being served from. Occasionally, it can be useful to specify an alternative base URI. To do this, an @base statement can be included (in a similar fashion to @prefix).

For example, if a document specifies @base <http://museum.example.org/artefacts/> . (an illustrative base URI), then the URI <12447652#id> within that document would be expanded to <http://museum.example.org/artefacts/12447652#id>, while the URI </artefacts/47fb01> would be expanded to <http://museum.example.org/artefacts/47fb01>.
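Relative URI resolution in Turtle follows the same rules as relative URLs on the web, so Python's standard library can demonstrate it. The base URI here is a hypothetical stand-in for whatever @base declares:

```python
from urllib.parse import urljoin

# A hypothetical base URI, standing in for the document's @base.
base = "http://museum.example.org/artefacts/"

# A path-relative reference resolves against the base's path:
print(urljoin(base, "12447652#id"))        # http://museum.example.org/artefacts/12447652#id
# A root-relative reference resolves against the authority only:
print(urljoin(base, "/artefacts/47fb01"))  # http://museum.example.org/artefacts/47fb01
```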

An example of a Turtle document making use of some of these capabilities is shown below:

@base <http://ialm.example.org/books/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix bibo: <http://purl.org/ontology/bibo/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<9781899066100#id>
	a bibo:Book ;
	dct:title "Acronyms and Synonyms in Medical Imaging"@en ;
	dct:issued "1997"^^xsd:gYear ;
	dct:creator _:allison, _:strickland ;
	dct:publisher [
		a foaf:Organization ;
		foaf:name "CRC Press"
	] .

_:strickland
	a foaf:Person ;
	foaf:name "Nicola Strickland" .

_:allison
	a foaf:Person ;
	foaf:name "David J. Allison" .

In this example, we are still describing our book, but we specify that the title is in English (though we don’t indicate any particular national variant of English); we state that it was issued (published) in the year 1997, and that its publisher, for whom we don’t have an identifier, is an organisation whose name is CRC Press.

From three to four: relaying provenance with quads

While triples are a perfectly serviceable mechanism for describing something, they can't tell you where the data came from (unless you impose a restriction that you only deal with data where the domain of the subject URI matches that of the server you’re retrieving from). In some systems, including Acropolis, this limitation is overcome by introducing a fourth element: a graph URI, identifying the source of a triple. Thus, instead of triples, the Research & Education Space actually stores quads.
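Extending the earlier 3-tuple sketch, a quad is simply the triple plus a graph URI naming its source. All URIs here are illustrative:

```python
# A quad: subject, predicate, object, plus a graph URI recording the source.
quad = (
    "urn:isbn:9781899066100",                       # subject
    "http://purl.org/dc/terms/title",               # predicate
    "Acronyms and Synonyms in Medical Imaging",     # object
    "http://ialm.example.org/books/9781899066100",  # graph: where the triple came from
)

subject, predicate, obj, graph = quad
print(graph)  # http://ialm.example.org/books/9781899066100
```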

When we assign an explicit URI to a graph in this way, it becomes known as a named graph—that is, a graph with an explicit identifier (name) assigned to it.

Turtle itself doesn’t have a concept of named graphs, but there is an extension to Turtle, named TriG, which includes the capability to specify the URI of a named graph containing a particular set of triples.

Why does the Research & Education Space use RDF?

RDF isn’t necessarily the simplest way of expressing some data about something, and that means it’s often not the first choice for publishers and consumers. Often, an application consuming some data is designed specifically for one particular dataset, and so its interactions are essentially bespoke and comparatively easy to define.

The Research & Education Space, by nature, brings together a large number of different structured datasets, describing lots of different kinds of things, with a need for a wide range of applications to be able to work with those sets in a consistent fashion.

At the time of writing (ten years after its introduction), RDF’s use of URIs as identifiers, common vocabularies and data types, inherent flexibility and well-defined structure mean that it is the only option for achieving this.

Whether you’re describing an audio clip or the year 1987, a printed book or the concept of a documentary film, RDF provides the ability to express the data you hold in intricate detail, without being beholden to a single central authority to validate the modelling work undertaken by experts in your field.

For application developers, the separation of grammar and vocabularies means that applications can interpret data in as much or as little detail as is useful for the end-users. For instance, you might develop an application which understands a small set of general-purpose metadata terms but which can be used with virtually everything surfaced through the Research & Education Space.

Alternatively, you might develop a specialist application which interprets rich descriptions in a particular domain in order to target specific use-cases. In either case, you don’t need to know who the data comes from, only sufficient understanding of the vocabularies in use to satisfy your needs.

However, we recognise that publishing and consuming Linked Open Data may be unfamiliar territory for an individual publisher or application developer, and so throughout the lifetime of the project we are committed to publishing documentation, developing tools and running workshops to help developers and publishers work more easily with RDF in general and the Research & Education Space in particular.

Requirements for publishers

Publishers wishing to make their data visible in the Acropolis index and usable by RES applications must conform to a small set of basic requirements, set out in the checklist below.

Although RES requires that you publish Linked Open Data, that doesn’t mean you can’t also publish your data in other ways. While human-facing HTML pages are the obvious example, there’s nothing about publishing Linked Open Data which means you can’t also publish JSON with a bespoke schema, CSV, spreadsheets, or operate complex query APIs requiring registration to use.

In fact, best practice is generally to publish in as many formats as you are able to, and to do so in a consistent fashion. And while your “data views” (that is, the structured machine-readable representations of your data about things) are going to be dull and uninteresting to most human beings, that doesn’t mean that you can’t serve nicely-designed web pages about them as the serialisation for ordinary web browsers.

Checklist for data publication

Support the most common RDF serialisations

RDF can be serialised in a number of different ways, but there are two serialisations which RES publishers must provide, because these are guaranteed to be supported by RES applications:

Name | Media type | Further information
Turtle | text/turtle | RDF 1.1 Turtle (W3C Recommendation)
RDF/XML | application/rdf+xml | RDF 1.1 XML Syntax (W3C Recommendation)

Turtle is increasingly the most common RDF serialisation in circulation and is very widely-supported by processing tools and libraries.

RDF/XML is an older serialisation which is slightly better supported than Turtle. It is often more verbose than the equivalent Turtle expression of a graph, but as an XML-based format it can be generated automatically from other kinds of XML using XSLT.

If you are considering publishing your data as JSON, you may consider publishing it as JSON-LD, a serialisation of RDF which is intended to be useful to consumers which don’t understand RDF specifically. JSON-LD isn’t currently supported by RES, but may be in the future.

Describe the document and serialisations as well as the item

A minimal RDF serialisation intended for use by RES must include data about three distinct subjects:

  • Document URL
  • Representation URL
  • Item URI
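Putting these three subjects together, a minimal Turtle response might look like the following sketch (the URIs, title and licence here are illustrative, modelled on the book example used elsewhere in this guide):

```turtle
# Illustrative sketch: data about the document, one representation, and the item.
</books/9781899066100>
    a foaf:Document ;
    foaf:primaryTopic </books/9781899066100#id> ;
    dct:hasFormat </books/9781899066100.ttl> .

</books/9781899066100.ttl>
    dct:license <http://creativecommons.org/licenses/by/4.0/> .

</books/9781899066100#id>
    dct:title "Acronyms and Synonyms in Medical Imaging"@en .
```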

It is recommended that publishers describe any other serialisations which they are making available as well, but it is not currently a requirement to do so.

A description of the metadata which should be served about the document and representations is included in the Metadata about documents section.

Include licensing information in the data

The data about the representation must include a rights information triple referring to the well-known URI of a supported license. See the Metadata describing rights and licensing section for further details.

Perform content negotiation when requests are received for item URIs

If you use fragment-based URIs, this means that your web server must be configured to perform content negotiation on requests received for the portion of the URI before the hash (#) sign.

For example, if your subject URIs are in the form:

/books/9781899066100#id

Then when your server receives requests for the document:

/books/9781899066100

It should perform content negotiation and return an appropriate media type, including the supported RDF serialisations if requested.
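For instance, a request from an application which prefers Turtle but can also handle RDF/XML might look like this (the path is illustrative):

```http
GET /books/9781899066100 HTTP/1.1
Accept: text/turtle, application/rdf+xml;q=0.8
```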

When sending a response, the server must send an appropriate Vary header, and should send a Content-Location header referring to the representation being served. For example:

Server: Apache/2.2 (Unix)
Vary: Accept
Content-Type: text/turtle; charset=utf-8
Content-Location: /books/9781899066100.ttl
Content-Length: 272

Editorial Guidelines for Content Providers

What do we mean by “editorial”?

In this context we mean what is in the metadata and the associated media, such as text, video or images.

  • What does it say and what is it about?
  • Is it suitable for all ages to see and hear?
  • Are there any limits you would want to set around who could see this material?

When making metadata and media available to education, it is important to understand the expectations of the users in terms of what they will see and hear.

These guidelines are intended to help content providers think about these issues as early in the process as possible.

The RES platform is funded with public money and needs to show that it is serving the public interest and behaving responsibly.

  • Some items in physical collections are only available to certain users.
  • How is this information transferred to the online catalogue?
  • Are there items in your collections which you believe are not suitable for under-18s?
  • How will you help end users know this?
  • The RES proposal intends that in schools, the primary users of the products built on the RES aggregator will be teachers.
  • But teachers are over-worked and are more likely to use your material if it is easy and quick to identify as relevant to their students.
  • If you hold any data or guidance on age suitability you should include this in the data you publish.
  • Users will be able to feed back to you about concerns with the metadata or assets, including possible breach of copyright – how will you as an institution manage this?
  • Although you will probably already have a mechanism for dealing with feedback and/or requests of either a legal (copyright, data protection etc) or editorial nature, it is worth being aware that RES may expose your material to a wider audience and these requests may therefore increase. Can your existing workflows manage this?
  • In sharing data and assets, are you comfortable that you are complying with the Data Protection Act?

Requirements for consuming applications

Applications built for RES must be able to understand the index assembled by Acropolis, as well as the source data it refers to. In practice, this means that they must be able to retrieve and process RDF from remote web servers and interpret at least those common metadata vocabularies described in this book which are relevant to the consuming application.

Retrieving and processing Linked Open Data

In a perfect world, consuming Linked Open Data is as straightforward as:—

  • Make a request for the URI you want to get data about, sending an Accept HTTP request header containing the MIME types of the formats you support in your application.
  • Parse the data in the response using an RDF parser.
  • Examine the parsed data to find triples whose subject is the URI that you started with.

While this process is simple, and could be implemented using virtually any HTTP client in common use today, it brings about a few questions. How do you deal with redirects? What happens if the server doesn't return the data in the format that you asked for? Where do you start?

This chapter aims to answer all of these questions so that your RES application can be both useful and robust in face of real-world challenges.

Consuming Linked Open Data in detail

As part of the RES project, we are developing a Linked Open Data client library. Although this library is currently only available for low-level languages such as C and C++, the process it follows can be implemented in any language. It is intended to be a liberal consumer which can deal with real situations, such as different kinds of redirects and content negotiation failing or being disabled by the publisher.

The algorithm is as follows (implemented in the LOD library in fetch.c):—

  1. Optionally, check if data about the request-URI is present in our RDF model: if so, return a reference to it.
  2. Append request-URI to subject-list.
  3. If request-URI has a fragment, remove it and store it as fragment.
  4. Set followed-link to false, and count to 0.
  5. If count is more than our configured max-redirects value, return an error status indicating that the redirect limit has been exceeded.
  6. Create an HTTP request for request-URI, setting the Accept header based upon the data formats supported by the application. Note that RES requires publishers and applications to support at least RDF/XML (application/rdf+xml) and Turtle (text/turtle), but both clients and servers may support other formats which can be negotiated.
  7. Perform the HTTP request. Note that this should be a single request-response pair, and not automatically follow redirects.
  8. If a low-level error in performing the request occurred (such as the hostname in the URI not being resolvable), return an error status indicating that the request could not be performed.
  9. Store the canonicalised form of request-URI as the base.
  10. Obtain the Content-Type of the response, if any, and store it in content-type.
  11. If the HTTP status code is between 200 and 299 and there is a document body:—

    1. If content-type is not set, return an error status indicating that no suitable data could be found.

      If the Content-Type is not one of text/html, application/xhtml+xml, application/vnd.wap.xhtml+xml, application/vnd.ctv.xhtml+xml or application/vnd.hbbtv.xhtml+xml, then skip to step 14.

    2. If followed-link is true, return an error status indicating that a <link rel="alternate"> has already been followed.
    3. Parse the returned document as HTML, and extract any <link> elements within <head> which have a type and href attributes and a rel attribute with a value of alternate.
    4. If no suitable <link> elements were found, return an error status indicating that no suitable data could be found.
    5. Rank the returned links based upon the application's weighting values (allowing an application to consume a particular serialisation if available in preference to others).
    6. Append the highest ranked link’s URI (that is, the value of the href attribute) to subject-list, set request-URI to it, set followed-link to true, increment count, and skip back to step 5.
  12. If the HTTP status code is between 300 and 399:—

    1. Set target-URI to the redirect target (the Location header of the HTTP response). If no target is available, return with an error status indicating that an unsuitable HTTP status was returned.
    2. If the HTTP status code is 303, set request-URI to target-URI, increment count and skip back to step 5.
    3. If fragment is set, append it to target-URI, replacing any fragment which might be present already.
    4. Set request-URI to target-URI, push target-URI onto subject-list, increment count, and skip back to step 5.
  13. If the HTTP code is not between 200 and 399, return an error status indicating that an HTTP error was returned by the server.
  14. Optionally, if content-type is text/plain, application/octet-stream or application/x-unknown, attempt to determine a new content type via content sniffing. If successful, store the new type in content-type.
  15. Parse the document body as content-type into our RDF model. If the type is not supported, or parsing fails for any other reason, return an error status.
  16. Starting with the first item in subject-list:—

    1. Set subject-URI to the current entry in the list.
    2. Perform a query against the RDF model to determine whether any triples whose subject are subject-URI exist.
    3. If triples were found, return a reference to them.
    4. Otherwise, move to the next item in subject-list.
  17. Finally, return an error status indicating that no triples were found in the retrieved data.
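As a worked example of the steps above, consider an application dereferencing the illustrative subject URI /books/9781899066100#id. The fragment is stripped and stored (step 3), and one possible exchange with the server is a 303 redirect (handled by step 12.2) followed by a successful Turtle response:

```http
GET /books/9781899066100 HTTP/1.1
Accept: text/turtle, application/rdf+xml;q=0.8

HTTP/1.1 303 See Other
Location: /books/9781899066100.ttl

GET /books/9781899066100.ttl HTTP/1.1
Accept: text/turtle, application/rdf+xml;q=0.8

HTTP/1.1 200 OK
Content-Type: text/turtle; charset=utf-8
Vary: Accept
```

The Turtle body is then parsed into the model (step 15), and the model is queried for triples about /books/9781899066100#id, the first entry in subject-list (step 16).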

A starting point: the RES index

Just as an ordinary web browser needs a homepage or an address bar, so too do Linked Open Data applications. Whether your application has a fixed configured starting point or is intended to be an open-ended “data browser”, the RES index is intended to be a useful Linked Open Data home for many applications.

Described in more detail in The RES API: the index and how it’s structured, the index is itself Linked Open Data which can be retrieved and processed using the algorithms described above, and the index URI can be used as the default “homepage” for RES applications.

In the same way that a homepage only provides the starting point for a web browser, the same is true of the RES index: applications can allow users to explore and search the index, but to also follow the onward links to source data and media assets.

For some applications, using the RES index as a starting point won’t be appropriate: it may be necessary or useful to implement an intermediary service that provides additional capabilities or a specific curated subset of resources. There is no requirement that RES applications must directly use the base of the RES index as their home.

Editorial Guidelines for Product Developers

What do we mean by “editorial”?

In this context we mean what is in the metadata and the associated media, such as text, video or images.

  • What does it say and what is it about?
  • Is it suitable for all ages to see and hear?
  • Are there any limits you would want to set around who could see this material?

When making metadata and media available to education, it is important to understand the expectations of the audience in terms of what they will see and hear.

These guidelines are intended to help product developers think about these issues as early in the design and development process as possible.

The RES platform is funded with public money and needs to show that it is serving the public interest and behaving responsibly.

  • The RES project envisages that in schools and FE colleges it will be teachers who are the primary users of the products built on top of the RES platform, both the catalogue and the assets.
  • Teachers will then judge the suitability of the content for particular age ranges and make it available to pupils.
  • The pupils and students will therefore be the secondary users of any products, accessing a moderated version of the whole platform.
  • Teachers will need to share material with pupils and other teachers and this functionality will be vital.
  • Where possible the metadata will include any guidance as to the suitability of the content for particular age groups, for example the BBC would include Guidance warning metadata.
  • How this will be displayed to teachers is an important consideration in the design of products and services.
  • However, where no such information is available, it needs to be clear that this does not mean that the material is necessarily suitable for all ages (so perhaps a “no age range given” tag is appropriate?)
  • The RES project will provide teachers with guidelines about the range of material available in RES and hints on how to navigate and mediate such a large volume of metadata and media.
  • Teachers will also form their own view of what material is suitable for whom, and their ability to add that information to the metadata and share it is important.
  • Every product or service built on the RES platform must have a means of feeding back any concerns about aspects of the assets or the metadata to the provider of the catalogue and assets.

The Research & Education Space API: the index and how it’s structured

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix
RDF schema | http://www.w3.org/2000/01/rdf-schema# | rdfs
RDF syntax | http://www.w3.org/1999/02/22-rdf-syntax-ns# | rdf
XHTML Vocabulary | http://www.w3.org/1999/xhtml/vocab# | xhtml

At the core of the platform is the Research & Education Space index. This index is available as web pages (to make it easier for application developers to see what’s there and how it works), but is primarily published as Linked Open Data. Accessing the index and requesting machine-readable data constitutes the Research & Education Space API.

The index takes the form of a void:Dataset, and the operations that you might perform against the index will often be applicable to other datasets that you might encounter.

Depending upon your application design, it may be desirable to offer the same browse and query capabilities to any dataset that the user navigates to, rather than hard-coding behaviour specific to the index.

Discovering capabilities

As the index is presented as Linked Open Data, discovering information about it uses the same process as obtaining descriptive metadata for anything else: de-reference the entity URI (which in the case of the index is the API root) and examine the triples whose subject is that URI.

Capability | Expressed using…
Class partitions (e.g., “all people”, “all places”) | void:classPartition
Browse endpoint for everything in the index | void:rootResource
Locate an entry from an external URI | void:uriLookupEndpoint
Free-form search (complete description document) | void:openSearchDescription
Free-form search URL template | osd:template
Links to entities contained within the index | rdfs:seeAlso
References to original source data about an entity in the index | owl:sameAs
Links to first, last, previous and next pages of results | xhtml:first, xhtml:last, xhtml:prev, xhtml:next

Structure of the index

The index is made up of a series of composite entities which are constructed using the data discovered by the crawler. Each of the composite entities has an owl:sameAs relationship with the various source entities used to construct it, a portion of whose data is cached in the index.

If you dereference the URI for the index, the result is some metadata about the index itself, including information about how to perform different kinds of query, the different browseable partitions, and some selected sample entities.

When a query is performed against the index (i.e., by adding some query parameters to the URI), the result is a small amount of metadata about the query and the results along with a list of these composite entities.

If you then dereference one of these entities—drilling down into it—the document returned will contain both the composite entity, and the cached data about the source entities. If the entity references, or is referenced by other entities, the relevant composite entities are also included.
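In outline, the data returned for a composite entity might look like the following sketch (all URIs are illustrative):

```turtle
# The composite entity, linked to the source entity it was built from:
</abc123#id>
    owl:sameAs <http://example.com/books/9781899066100#id> .

# Cached source data, relayed from the publisher:
<http://example.com/books/9781899066100#id>
    dct:title "Acronyms and Synonyms in Medical Imaging"@en .
```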

Common API operations

Below is a list of some of the most common kinds of operation an application might wish to perform against the index. Note that these operations can apply to any dataset.

Determine the kind of entity that retrieved data describes | Examine the rdf:type properties and compare against the class index.
Locate class partitions | Iterate the void:classPartition properties of the index.
Find the index entry for a particular entity | Append the encoded entity URI to the value of the void:uriLookupEndpoint property.
Perform a text query | Populate the template specified in the osd:template property (if present), or alternatively the template specified in the <Url> element corresponding to the desired data format in the OpenSearch Description document linked via the void:openSearchDescription property.
Locate the source data for an entity | Once the data for an entity has been retrieved, find the owl:sameAs triples which have the entity URI as either the subject or the object.
List the items in the dataset or a partition | Retrieve the data either from the URL in the void:rootResource property, from one of the void:classPartition properties, or a query, then locate all of the rdfs:seeAlso properties which have that URL as a subject.
Paginate through a dataset or query results | Follow the xhtml:prev and xhtml:next properties where available.
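As an example of the URI lookup operation, suppose the index description contains the following (the endpoint URL is illustrative):

```turtle
<> void:uriLookupEndpoint </lookup?uri=> .
```

An application looking up the external entity URI http://example.com/books/9781899066100#id would percent-encode it and append it to the endpoint, requesting /lookup?uri=http%3A%2F%2Fexample.com%2Fbooks%2F9781899066100%23id and processing the response like any other Linked Open Data document.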

Describing and consuming data

The Research & Education Space works on the basis that publishers of data consumed by the platform and product developers using the API both use the same sets of terms (vocabularies). If a publisher uses predicates that aren’t supported by a product, that data is essentially invisible to the product, resulting in a poor user experience.

Rather than attempt to describe a vocabulary which includes the classes and predicates needed to describe any conceivable kind of entity, the approach taken by the Research & Education Space is to identify those which are already in common usage and document them. For many publishers, this means that what they are already doing is entirely sufficient (or requires only very minor changes); for product developers, the amount of special-casing required to deal with data indexed by the Research & Education Space platform is kept to a minimum.

This chapter contains the recommended approaches to describing different kinds of entity. Data publishers should read it as our recommendations for how those entities should be described (although publishers are always free to include additional data not covered by our recommendations). Product developers should read the chapter as being a description of how they ought to expect the data which is surfaced by the platform to be structured.

Common metadata

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix
RDF syntax | http://www.w3.org/1999/02/22-rdf-syntax-ns# | rdf
RDF schema | http://www.w3.org/2000/01/rdf-schema# | rdfs
DCMI terms | http://purl.org/dc/terms/ | dct

Dublin Core Metadata Initiative (DCMI) Terms is an extremely widely-used general-purpose metadata vocabulary which can be used in the first instance to describe both web and abstract resources.

In particular, the following predicates are recognised by Acropolis itself and may be relayed in the RES index:

dct:title | Specifies the formal title of an item
dct:rights | Specifies a URI for rights information (see Metadata describing rights and licensing)
dct:license | Alternative predicate for specifying rights information
dct:subject | Specifies the subject of something

The FOAF vocabulary also includes some general-purpose predicates:

foaf:primaryTopic | Specifies the primary topic of a document
foaf:homepage | Specifies the canonical homepage for something
foaf:topic | Specifies a topic of a page (may be used instead of dct:subject)
foaf:depiction | Specifies the URL of a still image which depicts the subject
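Taken together, a description using these general-purpose predicates might look like this sketch (the URIs are illustrative):

```turtle
</books/9781899066100#id>
    dct:title "Acronyms and Synonyms in Medical Imaging"@en ;
    dct:subject </topics/medical-imaging#id> ;
    foaf:homepage </books/9781899066100.html> ;
    foaf:depiction </covers/9781899066100.jpg> .
```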

Referencing alternative identifiers: expressing equivalence

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix
OWL | http://www.w3.org/2002/07/owl# | owl

Linked Open Data in general, and the Research & Education Space in particular, is at its most useful when the data describing things links to other data describing the same thing.

In RDF, this is achieved using the owl:sameAs predicate. This predicate implies a direct equivalence relationship—in effect, it creates a synonym.

You can use owl:sameAs whether or not the alternative identifiers use http: or https:, although the usefulness of URIs which aren’t resolvable is limited.

For example, one might wish to specify that our book has an ISBN using the urn:isbn: URN scheme [RFC3187]:

</books/9781899066100#id> owl:sameAs <urn:isbn:9781899066100> .

We can also indicate that the book described by our data is the same book as one described by the British Library:

</books/9781899066100#id> owl:sameAs <> .

Metadata describing rights and licensing

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix
DCMI terms | http://purl.org/dc/terms/ | dct
Creative Commons REL | http://creativecommons.org/ns# | cc
ODRL 2.1 | http://www.w3.org/ns/odrl/2/ | odrl
ODRS Vocabulary | http://schema.theodi.org/odrs# | odrs

The data describing digital assets (including RDF representations themselves) must be made available under the terms of a supported license and include explicit licensing data in order for it to be indexed by the Research & Education Space and be useable by applications. Our approach is aligned with the Open Data Institute’s guide to publishing machine-readable rights data.

Incorporating rights information into RDF data

In many cases, the simplest way of expressing rights information is to include it in the data that you're publishing, and this can often be accomplished by adding a single triple to each published resource:

<> dct:license <> .

This example assumes that you do not set @base, or if present, that you don't set it to be anything other than the document’s own URI. If you do, you will need to be more specific in the subject of your licensing triple.

It is important that the subject of this triple is the URI of the concrete document. If you have different URIs for each RDF representation, and either send a Content-Location header or redirect to them, you need to ensure that the subject of the licensing triple is the representation-specific URI (e.g., /books/9781899066100.ttl).

This is because the Research & Education Space crawler is stateless, just like the underlying HTTP protocol itself. In practice, this means that when a document is being processed by the crawler, the only information which can be used to evaluate it is the request-URI, the Content-Location (if provided), any Link headers that were sent, and the serialised RDF itself.

The Research & Education Space crawler understands several common predicates for expressing the well-known URI of the license of a document, including dct:license, dct:rights and cc:license.

If you need to, you can provide more information than the license triple alone. For example, you might include a request which is not a formal requirement of the licensing terms, but which you would like consumers to adhere to if possible.

One way to do this is to include a dct:license triple referring to the well-known license URI, alongside a dct:rights triple pointing to a locally-defined odrs:RightsStatement entity (described using the Open Data Rights Statement Vocabulary).
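For example, a representation might carry both the formal licence and a rights statement with an attribution request (the URIs and literal text here are illustrative; the odrs terms are from the Open Data Rights Statement Vocabulary):

```turtle
</books/9781899066100.ttl>
    dct:license <http://creativecommons.org/licenses/by/4.0/> ;
    dct:rights </books/9781899066100#rights> .

</books/9781899066100#rights>
    a odrs:RightsStatement ;
    odrs:attributionText "Intergalactic Alliance Library & Museum" ;
    odrs:copyrightNotice "Copyright © 1997 the original authors." .
```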

The following example specifies that the Turtle representation of the data about our book is licensed according to the terms of the Creative Commons Attribution 4.0 International licence, using the host-relative URI identifying the specific representation.

</books/9781899066100.ttl> dct:license <http://creativecommons.org/licenses/by/4.0/> .

See the Metadata describing documents section for further details on describing individual RDF representations.

Describing conditionally-accessible resources

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix
Access Control ontology | http://www.w3.org/ns/auth/acl# | acl

Many kinds of digital asset are not available to the general public but may be accessed by the RES audience: students and teachers affiliated with a recognised educational institution in the UK. This may be because specific exceptions in law allow access when it would not otherwise be possible, or because the rights-holder has elected to make the assets available only to those in education.

In order to support this, and to ensure that users of RES applications are able to use the greatest range of material that they legitimately have access to, the metadata describing assets which aren’t available to the public but are available to educational users must describe the means by which they can be accessed.

This section will be expanded significantly in future editions.

Describing digital assets

When we talk about “digital assets”, we mean digital resources that users can stream or download to their devices: video, audio and image files, web pages, and other kinds of document.

In many cases, these digital assets are a manifestation of a creative work which should be described separately and related to the assets.

Metadata describing documents (including RDF serialisations)

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix
RDF syntax | http://www.w3.org/1999/02/22-rdf-syntax-ns# | rdf
DCMI terms | http://purl.org/dc/terms/ | dct
DCMI types | http://purl.org/dc/dcmitype/ | dcmit
Media types | |
W3C formats registry | http://www.w3.org/ns/formats/ | formats

Describing your document

Give the document a class of foaf:Document:

</books/9781899066100> a foaf:Document .

Give the document a title:

</books/9781899066100> dct:title "Information about 'Acronyms and Synonyms in Medical Imaging' at the Intergalactic Alliance Library & Museum"@en .

If the document is not a data-set, specify the primary topic (that is, the URI of the thing described by the document):

</books/9781899066100> foaf:primaryTopic </books/12345#id> .

Link to each of the serialisations:

</books/9781899066100> dct:hasFormat </books/9781899066100.ttl> .
</books/9781899066100> dct:hasFormat </books/9781899066100.html> .

Describe each of your serialisations

Use a member of the DCMI type vocabulary as a class:

</books/9781899066100.ttl> a dcmit:Text .

Where available, use a member of the W3C formats vocabulary as a class:

</books/9781899066100.ttl> a formats:Turtle .

Use the dct:format predicate, referring to the entry for the MIME type beneath the media types vocabulary tree:

</books/9781899066100.ttl> dct:format <> .

Give the serialisation a specific title:

</books/9781899066100.ttl> dct:title "Description of 'Acronyms and Synonyms in Medical Imaging' as Turtle (RDF)"@en .

Specify the licensing terms for the serialisation, if applicable:

</books/9781899066100.ttl> dct:rights <> .

See the Metadata describing rights and licensing section for details on the licensing statements required by RES, as well as information about supported licences.


Putting it all together:—

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix dcmit: <http://purl.org/dc/dcmitype/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix formats: <http://www.w3.org/ns/formats/> .

</books/9781899066100>
	a foaf:Document ;
	dct:title "Information about 'Acronyms and Synonyms in Medical Imaging' at the Intergalactic Alliance Library & Museum"@en ;
	foaf:primaryTopic </books/12345#id> ;
	dct:hasFormat </books/9781899066100.ttl> ,
		</books/9781899066100.html> .

</books/9781899066100.ttl>
	a dcmit:Text, formats:Turtle ;
	dct:format <> ;
	dct:title "Description of 'Acronyms and Synonyms in Medical Imaging' as Turtle (RDF)"@en ;
	dct:rights <> .

</books/9781899066100.html>
	a dcmit:Text ;
	dct:format <> ;
	dct:title "Description of 'Acronyms and Synonyms in Medical Imaging' as a web page"@en .

Collections and data-sets

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix
DCMI terms | http://purl.org/dc/terms/ | dct

Data-set auto-discovery

Audio, video, and images

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix
DCMI terms | http://purl.org/dc/terms/ | dct
DCMI types | http://purl.org/dc/dcmitype/ | dcmit
Media RSS | http://search.yahoo.com/mrss/ | mrss
Media types | |

Describing your media assets (or, in the case of embeddable players, the pages hosting them) allows applications to properly surface the correct media for a user. You can do this even if your media assets are hosted entirely separately from your data and other web pages.

  • Because you are describing directly-retrievable resources, the subject URIs are the actual URLs of your assets.
  • Use members of the DCMI Type Vocabulary as the classes of your assets: dcmit:MovingImage, dcmit:Sound, and so on.
  • For each asset, include a dct:format triple referring to the entry in the media types vocabulary matching the MIME type of the resource. For example, an MP4 audio file has a MIME type of audio/mp4, and so the dct:format would refer to the audio/mp4 entry in that vocabulary.
  • Add triples to the RDF describing the creative work that this asset is a representation of: mrss:player (for embedded player pages), foaf:page (for stand-alone playback pages) or mrss:content (for directly-accessible media).
  • Add foaf:primaryTopic triples to the data describing your assets, referring to the most specific creative work entity that the asset represents.
  • Use a rights statement predicate to link to an RDF policy document describing who is able to access the media asset if it is not generally-available, or use the well-known URI of a well-known license if the asset is available according to those terms.


For example, you might describe an episode of a television programme, using the programmes ontology, with the following (the subject URIs here are illustrative):—

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix dcmit: <http://purl.org/dc/dcmitype/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix mrss: <http://search.yahoo.com/mrss/> .
@prefix po: <http://purl.org/ontology/po/> .

</programmes/trampolining#programme>
	a po:Episode ;
	dct:title "Trampolining"@en-gb ;
	po:version </programmes/b04bndb9#programme> ;
	mrss:player </player/b04bndb9> .

</programmes/b04bndb9#programme>
	a po:Version ;
	rdfs:label "An episode of 'Nina and the Neurons: Get Sporty' - Trampolining"@en-gb ;
	po:aspect_ratio "16:9" ;
	po:sound_format "Stereo" ;
	po:duration 900 .

</player/b04bndb9>
	a dcmit:MovingImage ;
	dct:format <> ;
	dct:title "Nina and the Neurons: Get Sporty - CBeebies - 2015-05-27"@en-gb ;
	foaf:primaryTopic </programmes/b04bndb9#programme> ;
	dct:license <> .

This data describes an episode of Nina and the Neurons, a specific version of that episode, and an embeddable web player for that version, which is subject to access controls.

Note that in practice this data might be split over several RDF resources. However, if the specified media asset URL does not perform content negotiation and return an RDF description of itself, you should include the data about the asset in the same resource in which you describe the creative work that it represents.

Describing concepts and taxonomies

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix

Publishing digital media

The Research & Education Space will not directly consume or publish digital media (audio, video, images, documents) itself. However, it will aggregate data about digital media which has been published in a form which can be used consistently by applications built on the platform.

This chapter describes how those media assets can be published in ways which will be most useful to applications, while balancing the range of access mechanisms and rights restrictions applicable to users in educational settings.

While this chapter provides guidance on publishing the media assets themselves, those assets only become useful within the Research & Education Space when they are properly described in accompanying metadata. For more information on publishing data which describes digital media assets, please refer to the chapter Describing digital assets.

Approaches to publication

There are three strategies for publishing media for the Research & Education Space: publishing “raw” media assets, providing embeddable players, and publishing pages which include playback capabilities.

Publishing media directly

Publishing media directly is most suited to situations where the media assets are openly-licensed and can be both downloaded and streamed by applications. It is not suitable for media which is rights-restricted to the extent that downloads are not permitted.

Direct publishing allows an application to make use of native playback, viewing, editing, and tagging capabilities, and consequentially offers the greatest level of flexibility to applications and users alike. While it provides no technical barrier to end-users sharing downloaded media (in whole or part on its own, or combined into a larger composition), it does not automatically imply that sharing is permitted.

While affording the greatest level of flexibility to the consuming application, publishing media in this way is also the simplest from a technical perspective: the encoded media files are simply uploaded to a web server and then described in the accompanying metadata.

Use direct publication where:—

  • Licensing allows both streaming and download of the media asset.
  • You want to allow snipping or other kinds of editing of the media.
  • You want to provide the widest possible range of device support.

For example:—

Embeddable players

Embeddable players are best suited to situations where media files should not be downloaded by applications and end-users, but the playback capability may be provided in-line with other content by an application.

With an embeddable player, although media assets themselves are published in some fashion, the resource described in accompanying metadata is a web page capable of playing them, typically via an <iframe> or equivalent, with the metadata including the preferred dimensions of the frame.

This approach limits the capabilities which can be offered by the application to its users: as far as the application is concerned, the contents of the framed web page are completely opaque; it can only assume that the page will provide a suitable player for the media asset, and will have no control over playback.

Use an embeddable player where:—

  • Licensing only permits streaming of the asset, but does allow its presentation as part of a larger body of content (for example, within a MOOC).
  • Media is only available through a technology which may not be widely supported except through a custom player.
  • Your media is published through a third party solution which does not provide ready access to direct media asset URLs.
  • As a fall-back option alongside a direct media link (for example, to enable an application to generate the embeddable player code snippet for pasting into a MOOC or social network).

For example:—

Media asset URL:  //
MIME type:        text/html
Poster image URL: //
Preferred width:  500px
Preferred height: 281px
Title:            Mount Piños Astrophotography Time Lapse
License:          Creative Commons Attribution 3.0 Unported (CC BY 3.0)
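Given embeddable-player metadata of this kind, an application can construct the embed snippet itself. The following is a minimal Python sketch: the `generate_embed()` helper and the player URL are invented for illustration and are not part of any RES API.

```python
from html import escape

def generate_embed(player_url, width, height, title):
    """Build an <iframe> embed snippet from embeddable-player metadata."""
    return (
        '<iframe src="%s" width="%d" height="%d" title="%s" '
        'frameborder="0" allowfullscreen></iframe>'
        % (escape(player_url, quote=True), width, height,
           escape(title, quote=True))
    )

snippet = generate_embed(
    "https://player.example.com/embed/astro-timelapse",  # hypothetical URL
    500, 281,
    "Mount Piños Astrophotography Time Lapse",
)
```

An application might paste the resulting snippet into a MOOC page, as described in the fall-back use case above.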

Stand-alone playback pages

Stand-alone playback pages provide the least flexibility to applications, and—depending upon presentation—may result in reduced visibility of your media.

With this strategy, an application is not able to embed your media at all, but instead must navigate to the page that you provide in a browser window. The application might provide a thumbnail or text link to your playback page, or it might choose to omit the media altogether if including it would result in a poor user experience.

Use a stand-alone playback page where:—

  • Licensing restrictions mean that you’re not able to authorise any kind of embedding.
  • As a fall-back option alongside an embeddable player or direct media links (particularly if you already publish a playback page for each media asset).

For example:—

Media asset URL:
Title:                    Horizon: 1981-1982: The Race to Ruin
Geographical restriction: UK-only

Access control and media availability

A key aim of the Research & Education Space is to increase the visibility of and access to digital media resources which are available to staff and students of educational establishments within the United Kingdom. While this naturally includes the wealth of resources which are openly-licensed and available to everybody, it also includes digital media which can only be accessed at scale by UK educational users.

In order to provide access to this material, publishers typically implement some kind of access control. While the platform itself is generally agnostic to media assets and their access-control mechanisms, applications require the ability to make user-interface decisions based upon the access restrictions imposed upon the media.

For this reason, the Research & Education Space defines three specific kinds of access-control mechanism, as well as a policy according to which conformant media must be published. Specifically:—

  1. Media must be available either freely or under the terms of a blanket or statutorily-backed licensing scheme available to educational establishments (or licences may be obtained on their behalf by local authorities or central government).
  2. It must be possible to obtain the media without further subscription or other charges; however, “value-added” services may be provided which offer additional capabilities (such as archiving or enhanced search), provided those services can be readily subscribed to at an establishment level.
  3. The media must be generally available on a long-term basis. Media available only for short periods has limited value in education because it prevents the same resources being used again in the future.
  4. The technical access-control mechanisms must be one or more of those described below.
  5. The nature of the access-control mechanism must be described in the metadata accompanying the media.

For example, all of the following conform to the policy:—

  • Media published via Wikimedia Commons is available to everybody on a permanent basis without any additional payment or subscription.
  • Programmes which are part of BBC Four Collections are made available to everybody in the UK on a long-term basis (but may not be embedded). Access control is implemented through geo-blocking.
  • Recordings of broadcasts made under the terms of Section 35 of the Copyright, Designs and Patents Act 1988 (as amended) may be used by the institution which recorded them (or on whose behalf they were recorded), provided their ERA Licence is maintained.
  • Services which are authorised by ERA to maintain an archive of Section 35 recordings and make them available to ERA Licence-holders who pay a subscription fee, provided access is through a mechanism described below.
  • A consortium of rights-holders who together define a scheme for access to one or more sets of media on an affordable establishment-level subscription basis, provided access is through a mechanism described below.

For more information about describing rights restrictions and access-control mechanisms, see Metadata describing rights and licensing and Describing conditionally-accessible resources.

Geographical restrictions (geo-blocking)

Geo-blocking is the automatic determination of the ability to access a resource by looking up the end-user’s public IP address in a database correlating IP address ranges with countries: for example, one address may fall within a range allocated within the UK, while another falls within a range allocated within the US.

Geo-location databases and live services are available both for free and on commercial terms, with varying levels of quality and service assurance.

Geo-blocking should generally be applied only where other access-control mechanisms are not applicable: for example, because a media asset is available to everybody within a particular country.
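The look-up itself can be sketched as follows, assuming a tiny in-memory table of address ranges in place of a full geo-location database. The ranges below are taken from the reserved documentation block (192.0.2.0/24) and are purely illustrative, not real country allocations.

```python
import ipaddress

# Illustrative table only: real geo-blocking uses a full geo-location
# database. These are reserved documentation ranges, not real allocations.
GEO_RANGES = {
    "GB": [ipaddress.ip_network("192.0.2.0/25")],
    "US": [ipaddress.ip_network("192.0.2.128/25")],
}

def country_for(ip):
    """Return the country code whose ranges contain this address, if any."""
    addr = ipaddress.ip_address(ip)
    for country, networks in GEO_RANGES.items():
        if any(addr in net for net in networks):
            return country
    return None

def may_access(ip, allowed=("GB",)):
    """Geo-blocking decision: is this client in an allowed country?"""
    return country_for(ip) in allowed
```

A real service would also need to handle proxies, IPv6, and addresses absent from the database, which is one reason geo-location quality varies between providers.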

Federated access control using Shibboleth and the UK Access Management Federation

Shibboleth is a federated authentication single sign-on mechanism which is widely used by providers of materials to provide access only to staff and students of educational establishments.

The UK Access Management Federation, operated by Janet, provides the Shibboleth federation for UK institutions.

Shibboleth-protected resources present a sign-in page to users who are not already authenticated, which makes it suitable for use with both the embeddable player and the stand-alone playback page publication approaches described above.

Shibboleth-based access control is the preferred mechanism for use where media should be made available only to educational users.

IP-based access control

IP-based access control is often the simplest mechanism to implement, as it requires only that the publisher check the end-user’s public IP address against a white-list and allow or deny access as appropriate.

However, creating and maintaining that white-list can involve significant administrative burden, particularly on a nation-wide basis, and it does not allow ready access to media to remote-working staff and students without their institution providing additional infrastructure such as remote-desktop services and VPNs.

IP-based access control should generally be employed alongside Shibboleth-based authentication, and only for specific institutions which are not able to participate in the UK Access Management Federation.

Describing creative works

Vocabularies used in this section:

Vocabulary | Namespace URI | Prefix
RDF syntax | http://www.w3.org/1999/02/22-rdf-syntax-ns# | rdf:
Bibliographic Ontology | http://purl.org/ontology/bibo/ | bibo:
DCMI terms | http://purl.org/dc/terms/ | dct:
Programmes Ontology | http://purl.org/ontology/po/ | po:

“Creative works” is the broad term used to describe books, magazines, photographs, paintings, music, TV shows, and so on. In the Research & Education Space, a creative work is a distinct entity with its own description separate to any digital assets which are manifestations of creative works. It’s not uncommon for there to be multiple digital manifestations hosted by different organisations of the same creative work.

  • Use the Programmes Ontology to describe television, radio, and online-only programmes, clips and series.
  • Use the Bibliographic Ontology to describe books and periodicals.
  • Use foaf:topic and foaf:primaryTopic as appropriate to relate creative works to the subjects of those works. Where possible, use URIs for terms that are also used by other people, such as WikiData and DBpedia.

    If you have your own subject hierarchy which is also published as Linked Open Data, you can establish topic references into that hierarchy, and then express the equivalence between your terms and those defined by others.

  • Try to express as much detail about your creative works as you feasibly can. In particular, if your works are organised into series, or might have different editions or versions, it's helpful to express this information as it enables more comprehensive user journeys.
  • Follow the patterns described in Describing digital assets to relate specific manifestations of works to the information about the works themselves. This allows interfaces to offer links or embedded playback facilities for media.

Describing physical things

Describing people, projects and organisations

Describing places

Describing events

Under the hood: the architecture of Acropolis

The Acropolis stack consists of several distinct, relatively simple services. Within the Research & Education Space, they are used together, but each can be deployed independently to suit different applications.

High-level architecture of the Acropolis stack


Quilt is a modular Linked Open Data server. At its simplest, Quilt can serve a directory tree of Turtle files in the various RDF serialisations supported by librdf, but it can be extended with new engines (which can retrieve data from alternative sources), serialisers (which can output the data in different formats), and SAPIs (server interfaces, which receive requests from different sources).

The core of Quilt is libquilt, which is linked into the SAPI that is used to receive requests. libquilt encapsulates request and response data, and implements a common approach to configuration, module loading, and the request-processing workflow irrespective of which SAPI is in use. Each request is processed as follows:

  1. Encapsulate the request data obtained from the SAPI (such as the request-URI and any request headers) along with an empty librdf model which will contain the response data. This encapsulated request-response object is passed first to the engine and then on to the serialiser, which generates the response payload.
  2. Perform content negotiation to determine the best response format supported by both the client and the server.
  3. Pass the request to the configured engine for processing: the engine is responsible for populating the RDF model (or returning an error response if it's unable to).
  4. The serialiser for the negotiated response format completes the request by serialising the RDF model in that format.
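The workflow above can be sketched as follows. This is an illustrative Python model only: Quilt itself is implemented in C against librdf, and the class and function names here are invented for the example.

```python
class Request:
    """Stands in for libquilt's encapsulated request-response object."""
    def __init__(self, uri, accept):
        self.uri = uri            # request-URI from the SAPI
        self.accept = accept      # formats the client can interpret, best first
        self.model = []           # stands in for the empty librdf model
        self.format = None        # chosen by content negotiation

SERVER_FORMATS = ["text/turtle", "application/rdf+xml"]

def negotiate(req):
    """Step 2: pick the best format supported by both client and server."""
    for fmt in req.accept:
        if fmt in SERVER_FORMATS:
            req.format = fmt
            return True
    return False

def engine(req):
    """Step 3: the engine populates the RDF model (here, a canned triple)."""
    req.model.append((req.uri, "rdfs:label", "Example"))
    return True

def serialise(req):
    """Step 4: the serialiser renders the model in the negotiated format."""
    return req.format, len(req.model)

req = Request("/things/1", ["text/html", "text/turtle"])
if negotiate(req) and engine(req):
    payload = serialise(req)
```

Note how the engine and serialiser only ever see the encapsulated object, which is what lets Quilt swap engines, serialisers and SAPIs independently.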

Quilt includes two SAPIs: a FastCGI interface, which receives requests from any web server supporting the FastCGI interface, and a command-line SAPI which is useful for testing and debugging. New SAPIs can be developed by implementing the Quilt server interface and linking against libquilt.

Quilt itself comes with engines for obtaining data from files and from a quad-store. In both cases, the engines perform simple translation of the request-URI into a file path or a graph URI and populate the RDF model with the contents of that file or graph.

Engines could be developed which obtain data from any source. For example, the BBC billings data service, operated as part of the Research & Education Space, is implemented as an engine which populates RDF models based upon queries performed against a SQL database.

The Acropolis stack includes another engine, Spindle, which implements the query capabilities provided to applications by the Research & Education Space API.

libquilt itself incorporates a serialiser which will generate output from an RDF model in any format supported by librdf. An additional serialiser is included which can generate HTML from templates written using a subset of the Liquid templating language.


Twine is a simple, modular, queue-driven workflow engine designed for RDF processing. It receives AMQP messages whose payload is a document which can be transformed to RDF and pushed, using SPARQL 1.1 Update, into a quad-store. Future versions of Twine may support other queue mechanisms, such as Amazon SQS. More information about using Twine can be found in the manual pages.

Twine is typically operated as a continuously-running daemon, twine-writerd. Each received message must include a content type in its headers, which is used to determine which processing module the message should be routed to.

An internal API allows this basic workflow to be augmented by support for new message types, pre-processors (which can perform early transformation of RDF graphs before normal message processors are invoked), and post-processors (which can perform additional work based upon the final rendition of a graph).

Twine includes processors for TriG and N-Quads (which simply store each named graph within the source data), the GeoNames RDF dump format, and a configurable XSLT processor which applies user-supplied XSL transforms in order to generate RDF/XML from source data in an arbitrary XML format.

A special class of processors, called handlers, allows for a degree of indirection in message processing. Handlers use the contents of a message to retrieve data from another source which can then be passed back to Twine for processing as if it had been received as a message directly.

For example, an S3 handler receives messages whose payload is simply one or more S3 URLs (i.e., URLs in the form s3://bucketname/path/to/resource). Each is fetched in turn, and passed back to Twine for normal processing. The S3 handler works with both Amazon S3 and the Ceph RADOS object gateway.
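The indirection performed by such a handler can be sketched as follows. Here `parse_s3_url()`, the `fetch_object()` callback and the one-URL-per-line payload format are assumptions for illustration, not Twine's actual message format or API.

```python
from urllib.parse import urlparse

def parse_s3_url(url):
    """Split an s3://bucketname/path/to/resource URL into (bucket, key)."""
    parts = urlparse(url)
    if parts.scheme != "s3":
        raise ValueError("not an S3 URL: " + url)
    return parts.netloc, parts.path.lstrip("/")

def handle_message(payload, fetch_object, process):
    """Fetch each referenced object and hand it back for normal processing.

    fetch_object(bucket, key) stands in for a real S3/RADOS client call;
    process() stands in for re-submitting the content to Twine.
    """
    for line in payload.splitlines():
        bucket, key = parse_s3_url(line.strip())
        process(fetch_object(bucket, key))
```

The point of the indirection is that the fetched content then flows through exactly the same content-type routing as a directly-received message.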

The Anansi handler is very similar to the S3 handler, but it is designed to process messages containing S3 URLs to objects and extended metadata cached in a bucket by the Anansi web crawler.

Bridges are tools which push messages into the Twine processing queue. A simple example bridge, twine-inject, reads from standard input and pushes the contents directly into the queue. An additional bridge is included which queries an Anansi database for newly-cached resources and pushes messages containing their URLs into the processing queue.

For the Research & Education Space, the Spindle module for Twine is responsible for processing RDF crawled by Anansi in order to generate the index.


Anansi is a web crawler, which is used in the Research & Education Space to find and cache Linked Open Data for processing by Twine.

Anansi is implemented as a generic web crawling library, libcrawl, and crawling daemon, crawld. Loadable modules are used to provide support for different cache stores and for processing engines which are able to inspect retrieved resources (and potentially reject them if they do not meet desired criteria), and extract URLs which should be added to the crawler queue.

The daemon is intended to operate in a parallel fashion. Although an instance can be configured to run in a fixed-size cluster, it can also use etcd for dynamic peer discovery. In this dynamic configuration, the cluster can be expanded or contracted at will, with Anansi automatically re-balancing each node when the cluster size changes.

Anansi includes a generic RDF processor, which indiscriminately follows any URIs found in documents which can be parsed by librdf. This is extended by the Linked Open Data module, which requires that explicit open licensing is present and rejects resources which don’t include licensing information, or whose licence is not in the configurable white-list. This module is used by the Research & Education Space to process RDF and reject resources which do not meet the licensing criteria.


Spindle module for Twine

Within the platform, Linked Open Data which has been successfully retrieved by Anansi is cached in a RADOS bucket. The s3:// URLs are passed to the Twine message queue for processing by Spindle’s module for Twine, along with Twine’s provided RDF quads processor.

The module includes both a pre-processor and post-processor, and is responsible for implementing the co-reference aggregation, indexing and caching logic of the Research & Education Space.

When first loaded, the module parses and evaluates its rule-base, which specifies how co-referencing predicates should be interpreted, which predicates in source data should be cached, and the relationship between the classes and predicates in the source data and those incorporated into the generated aggregate entities. See the class and predicate indices for more information on these relationships.

The pre-processor is applied to any data before it is written to the platform’s quad-store, and uses the information in the rule-base to remove triples from the graph which should not be cached.

Once the data has been “stripped” by the pre-processor, Twine’s RDF quads processor writes the updated graphs to a quad-store via a SPARQL PUT request. Thus, the quad-store contains a copy of all of the source data which the rule-base specifies should be cached by the platform.

Twine invokes any registered post-processors for each graph which is updated once the update has been completed, and the Spindle module installs a post-processing handler so that it can perform indexing and aggregation when this happens. The post-processing steps are described in the following sections:—

Co-reference discovery

For each updated graph, Spindle generates a list of co-references using the matching rules specified in the rule-base. To do this, both the source data and existing cached data referring to the subjects are evaluated (that is, the order in which the data is processed by Spindle doesn’t matter, which is important because Anansi might encounter it in any order). Where no co-references are found for a particular subject, it is added to the co-reference list as a “dangling reference”.

Next, each entry in the list of co-references is assigned a UUID which is used to form the URI of the entity within the Research & Education Space index.

Where a particular entity is encountered for the first time (either because all of the known co-references are within the graph being processed, or because no co-references were found), a new UUID is generated and assigned to the entity.

Where the newly-discovered co-references refer only to the same existing entity (and possibly to other entities about which there is no existing data), the existing entity’s UUID is simply assigned to the entity.

Finally, where the co-references span two or more existing entities, they are all assigned the same UUID (that is, the existing entries will be updated as well).

The result is a set of pairs of “local” subject URIs (comprising the configured base URI, followed by the UUID assigned as described above, and a fixed fragment of #id) and “remote” subject URIs from the source data. These pairs are written either into the quad-store as owl:sameAs triples (in the graph whose name is the configured base URI) or into a SQL database table.

Each updated local UUID-derived URI is added to a list for this processing pass which is passed to the subsequent phases described below.

For example, if we begin with no previously-cached data and process a graph which states that A and B are equivalent, then the pair of (A, B) will be added to the co-reference list described at the beginning of this section. Because this co-reference doesn’t refer to any previously-known entities, it’s assigned a newly-generated UUID which we can refer to as U1, and two local-remote co-reference pairs are generated and stored in the quad-store or SQL database:—

  1. (http://baseuri/U1#id, A)
  2. (http://baseuri/U1#id, B)

Next, we process another graph which states that C and D are equivalent, and this results in a new UUID being generated, U2, and two new pairs being generated and stored:—

  1. (http://baseuri/U2#id, C)
  2. (http://baseuri/U2#id, D)

If we then process a graph which states that A and D are equivalent (possibly along with other subjects), then the references from U1 or U2 will be updated so that A, B, C and D are all stored as co-references of a single local entity. For example purposes, we shall say that U1 is the chosen UUID, although either could occur (the choice is not currently deterministic). Thus, we update the U2 references such that:—

  1. (http://baseuri/U1#id, C)
  2. (http://baseuri/U1#id, D)

This means that if we query our quad-store or SQL database for co-references for U1, the following pairs will be returned:—

  1. (http://baseuri/U1#id, A)
  2. (http://baseuri/U1#id, B)
  3. (http://baseuri/U1#id, C)
  4. (http://baseuri/U1#id, D)
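The merging behaviour in this worked example can be sketched as follows. This is an illustrative model only: it stubs UUIDs with a counter, keeps the co-reference pairs in a dictionary rather than a quad-store or SQL table, and (unlike the real implementation) picks the surviving identifier deterministically.

```python
import itertools

_counter = itertools.count(1)
uri_to_local = {}   # remote subject URI -> local identifier ("U1", "U2", ...)

def add_coreference(a, b):
    """Record that subject URIs a and b refer to the same entity."""
    existing = {uri_to_local[u] for u in (a, b) if u in uri_to_local}
    if not existing:
        # neither URI is known: a brand-new entity
        local = "U%d" % next(_counter)
    else:
        # reuse one existing identifier; the real implementation's choice
        # is not deterministic, but here we pick the lowest for readability
        local = sorted(existing)[0]
        # merge: repoint every URI belonging to the other entities
        for uri, ident in list(uri_to_local.items()):
            if ident in existing:
                uri_to_local[uri] = local
    uri_to_local[a] = local
    uri_to_local[b] = local
    return local

add_coreference("A", "B")   # new entity U1
add_coreference("C", "D")   # new entity U2
add_coreference("A", "D")   # merges: A, B, C and D now share one identifier
```

Querying the dictionary afterwards yields the same four pairs as the worked example above, all against a single local identifier.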

Proxy generation

For each updated local URI, a proxy is generated: that is, an RDF entity distilled, using the rule-base, from all of the source data with which it is co-referenced.

The rule-base consists of two sets of rules for proxy generation, which are processed in turn. First, the class of the proxy is determined by finding all of the classes of all of the co-referenced entities and ordering them according to their score values in the rule-base.

Next, a similar process is applied to properties across the source data. The property rules include a similar scoring approach to that used by the classes (so that, for example, skos:prefLabel takes precedence over rdfs:label), as well as discrimination by data type (longitude and latitude should not be an xsd:dateTime, for example) and class applicability (that is, some properties are ignored unless the class determined in the previous step is a particular value: e.g., gn:parentFeature would be ignored for a foaf:Person).

All of this “conveyed” data has a UUID-derived local subject URI, as described above, and is placed in a named graph whose URI is in the form http://baseURI/UUID.
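The scoring step can be sketched as follows; the predicates and score values are invented for illustration and do not reflect the actual rule-base.

```python
# Illustrative scores only; the real rule-base assigns its own values and
# also discriminates by data type and class applicability.
SCORES = {"skos:prefLabel": 100, "dct:title": 75, "rdfs:label": 50}

def pick_label(candidates):
    """Choose the best label from (predicate, value) pairs in source data."""
    scored = [(SCORES.get(pred, 0), pred, value) for pred, value in candidates]
    scored.sort(reverse=True)
    return scored[0][2] if scored else None
```

So given both an rdfs:label and a skos:prefLabel for the same entity, the skos:prefLabel value would be conveyed into the proxy.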


If Spindle is configured to use a SQL database as an index, then certain elements of the generated entity are stored in database tables for later query and retrieval. These include:—

  • the combined list of RDF classes of the entity;
  • the label and description in any languages they are available;
  • the UUIDs of any entities that have this entity as a topic;
  • if this item is a creative work, then any digital assets which are manifestations of it;
  • if the entity is a place, the longitude and latitude, if known; and
  • if this item is a digital asset, then information about the kind (e.g., moving-image, sound, interactive resource, etc.), type (in the form of a MIME type, such as text/html), and where the asset is known to be conditionally accessible, the URIs of audiences which are able to access it.

Storing pre-composed quads

The RDF graph generated above (see Proxy generation) is written to an S3 or RADOS bucket as N-Quads if the module has been configured to do so. If not, then the graph is written into the quad-store instead.

Where N-Quads are stored in a bucket, the document will also include all of the data about the co-referenced entities from their source graphs as well. This means that the Spindle module for Quilt can rapidly retrieve the majority of the data about an entity with a single authenticated GET request to the bucket.

Spindle module for Quilt

The Spindle module for Quilt is a companion to the corresponding Twine module described above, and includes several capabilities not present in the simple resource-graph module that is included with Quilt itself:—

  • the ability to perform URI look-up queries (i.e., locate the entry within the index which is co-referenced to the specified URI and redirect to it);
  • the ability to retrieve data about the co-referenced entities from their original graphs;
  • when configured with an S3 or RADOS bucket, the module can avoid SPARQL for simple fetches and instead perform a simple fetch from the bucket; and
  • when configured to use a SQL database, the ability to efficiently perform complex queries, such as “locate all places like ‘france’ which have related video that is available to users of a particular service”.

When configured with both a SQL database and an S3 or RADOS bucket, the module does not perform any SPARQL queries at all, although this may change in the future as graph databases evolve.

The module processes four kinds of request, which are described in more detail in the section The Research & Education Space API: the index and how it’s structured:—

  • a “root resource” request, which generates data about the different class partitions and the available query capabilities;
  • an item request, which retrieves data about the item using its “local” URI, as well as data about related entities and media;
  • a look-up request, which accepts a subject URI and responds either with a 303 See other redirect response, or a 404 Not found (if the URI is not present in the index); and
  • an index query request, which can be a query across any combination of RDF class, free-form text, related media kind, related media MIME type or audience.
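The look-up request, for instance, behaves like the following sketch; the index contents and base URI are invented for illustration.

```python
BASE = "http://baseuri/"                       # illustrative base URI
INDEX = {"http://example.com/thing": "U1"}     # subject URI -> local UUID

def lookup(uri):
    """Return (status, location) for a look-up request."""
    if uri in INDEX:
        return 303, BASE + INDEX[uri] + "#id"  # 303 See other
    return 404, None                           # 404 Not found
```

Because every request type is a read, instances answering these requests need no write co-ordination, which is what makes the horizontal scaling described below straightforward.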

Because the API provided by the module is read-only, the cluster of Quilt instances can be scaled up and down to meet demand as required with minimal co-ordination (subject to underlying database scalability).

Appendix I: Tools and resources


Tools for consuming Linked Open Data

Tools for processing RDF and publishing Linked Open Data

Technical standards



Acropolis

The software stack which powers the Research & Education Space.


Anansi

The Acropolis web crawler, which is used in the Research & Education Space to locate Linked Open Data.

audience (conditional access)

Within the Research & Education Space, in the context of conditional access to resources, the term audience refers to a specific group of people who are assigned an identifier in the form of a URI. This allows any data about resources accessible to that group to use the same identifier to refer to them, and for the Research & Education Space index to allow queries for digital assets which can be accessed by them.

Content negotiation

The mechanism by which an HTTP user-agent specifies the list of formats it is able to interpret, and a server selects the format it will return a document in based upon that information (in principle, it selects the highest-scored format from the intersection of client-supported and server-supported formats).
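In Python, the selection described above can be sketched as follows, assuming a simple Accept-style header with optional q-values; real HTTP negotiation also handles wildcards and parameters, which this sketch omits.

```python
def negotiate_format(accept_header, server_formats):
    """Pick the highest-q client format that the server also supports."""
    client = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        q = 1.0                      # q defaults to 1.0 when unspecified
        for field in fields[1:]:
            field = field.strip()
            if field.startswith("q="):
                q = float(field[2:])
        client.append((q, fields[0].strip()))
    # walk the intersection from highest score downwards
    for q, fmt in sorted(client, reverse=True):
        if fmt in server_formats:
            return fmt
    return None
```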


co-reference

A piece of data which states that two identifiers (in the form of URIs within the context of the Research & Education Space) refer to the same entity.

creative work

The abstract form of the output of a creative process. Creative works can have manifestations in several forms, some of which may be digital assets. For example, a book can be a creative work, with both printed and EPUB editions being (physical and digital) manifestations of it.

digital asset

Any sort of document or media, including machine-readable data, which can be represented digitally. This includes RDF/XML documents, PDFs, MP3 audio, PowerPoint presentations, and so on.


entity

Something which is described by some data. Often termed a resource in the RDF specifications, but this can be confusing because “resource” is often used to refer to a document which can be transferred electronically (particularly via HTTP).

Linked Data

Data which is published on the web using resolveable URIs as identifiers, so that de-referencing the URI of something retrieves the data about it.

Linked Open Data

Linked Data which is also openly licensed.

manifestation (creative work)

A particular physical or digital version of a creative work. For example, a PDF can be a digital asset which is a manifestation of a book, which is a creative work.

Not all digital assets are manifestations of creative works: some digital assets are representations of data produced by some automated process, for example.


Quilt

A Linked Data server which is part of the Acropolis stack. Used within the Research & Education Space to serve its API.


serialisation

A concrete representation of a document (typically RDF) in some format.


Spindle

The aggregator which forms part of the Acropolis stack, comprising plug-in modules for Twine and Quilt. Responsible for generating and serving the Research & Education Space index.


Twine

An RDF processing engine which is part of the Acropolis stack, and which is used in the Research & Education Space to process Linked Open Data that has been fetched by Anansi.

Codec & container format reference

Video codecs

PreservationLong-term archive storageLossless compression, typically 2:1DNG sequence, Motion JPEG 2000 lossless, VC2 (Dirac) lossless
Intermediate (mezzanine)Fine-cut editingVisually lossless, typically 4:1–6:1VC2 (Dirac), VC3 (DNx), Apple ProRes
DeliveryDistribution through a broadcast chain or publishing on physical mediaOutput format, constrained by bandwidth, typically 10:1–40:1H.262 (MPEG-2 Part 2), H.264 (MPEG-4 Part 10, AVC)
BrowseLightweight, streamable, viewing proxyOutput format, constrained by bandwidth, typically in excess of 50:1H.262 (MPEG-2 Part 2), H.264 (MPEG-4 Part 10, AVC), WebM (VP8+), Theora (VP3+), VP6
Codec | Kind | Authority | Lossy/lossless | Depth (BPC) | Chroma | Notes
SMPTE VC-2 (Dirac) | Video | SMPTE/BBC | Both | 8, 10, 12 | 4:2:0, 4:2:2, 4:4:4 | Currently limited support
SMPTE VC-3 (DNx) | Video | SMPTE/Avid | Lossy | 8, 10 | 3:1:1, 4:2:2, 4:4:4 | Max 1080i59.94
H.262 (MPEG-2 Part 2) | Video | ISO/MPEG | Lossy | 8 | 4:2:0, 4:2:2, 4:4:4 | Considered legacy
H.264 (MPEG-4 Part 10, AVC) | Video | ISO/MPEG | Lossy | 8, 10 | 4:2:0, 4:2:2, 4:4:4 | Widely supported
Apple ProRes | Video | Apple | Lossy | 10, 12 | 4:2:2, 4:4:4 | Proprietary intermediate codec
Apple Intermediate Codec | Video | Apple | Lossy | 8, 10 | 4:2:0 | Considered legacy
Ogg Theora/VP3 | Video | Xiph | Lossy | 8 | 4:2:0, 4:2:2, 4:4:4 |
VP6 | Video | Google/Adobe | Lossy | 8 | 4:2:0 | Classic Flash video codec
WebM/VP8+ | Video | Google | Lossy | 8 | 4:2:0 | Limited support
Motion JPEG 2000 | Video | ISO/JPEG | Both | 8, 10 | Various | Particularly suited to preservation

Audio codecs

Tier | Purpose | Compression | Typical codecs
Preservation | Long-term archive storage | Lossless compression, typically 2:1 | Raw PCM, FLAC, ALAC, Dolby TrueHD
Intermediate (mezzanine) | Fine-cut editing | Audibly lossless, typically 4:1–6:1 | Raw PCM, FLAC, ALAC, AAC (MPEG-2 Part 7, MPEG-4 Part 3), Dolby TrueHD
Delivery | Distribution through a broadcast chain or publishing on physical media | Output format, constrained by bandwidth, typically 7:1 | AAC (MPEG-2 Part 7, MPEG-4 Part 3), MP3 (MPEG-1 Part 3, MPEG-2 Part 3), Dolby AC-3, Dolby TrueHD
Browse | Lightweight, streamable, proxy | Output format, constrained by bandwidth, typically in excess of 11:1 | AAC (MPEG-2 Part 7, MPEG-4 Part 3), MP3 (MPEG-1 Part 3, MPEG-2 Part 3), Dolby AC-3
Codec | Kind | Authority | Lossy/lossless | Notes
Raw PCM | Audio | Various | Uncompressed | Typically wrapped in AIFF or RIFF (WAV)
FLAC | Audio | Xiph | Lossless | Limited hardware support
Apple Lossless (ALAC) | Audio | Apple | Lossless | Limited support
Dolby TrueHD | Audio | Dolby | Lossless |
Dolby AC-3 | Audio | Dolby | Lossy | Widely supported in professional applications
AAC (MPEG-2 Part 7, MPEG-4 Part 3) | Audio | ISO/MPEG | Lossy | Widely supported
MP3 (MPEG-1 Part 3, MPEG-2 Part 3) | Audio | ISO/MPEG | Lossy | Very widely supported
Ogg Vorbis | Audio | Xiph | Lossy | Adopted as audio codec for WebM
Opus | Audio | IETF | Lossy | Currently being trialled, particularly by radio broadcasters

Image codecs

Tier | Purpose | Compression | Typical codecs
Preservation | Long-term archive storage, editing & composition | Lossless compression, typically 2:1 | Adobe DNG (RAW), JPEG 2000 (ISO/IEC 15444) lossless, TIFF, PNG
Delivery | Distribution through a broadcast chain or publishing on physical media | Output format, constrained by bandwidth, typically 10:1–40:1 | JPEG 2000 (ISO/IEC 15444) lossless, TIFF, PNG
Browse | Lightweight viewing proxy/thumbnail | Output format, constrained by bandwidth, typically in excess of 30:1 | JPEG (ISO/IEC 10918), JPEG 2000 (ISO/IEC 15444) lossless, PNG
Codec | Kind | Authority | Lossy/lossless | Depth (BPC) | Chroma | Notes
Adobe DNG | RAW image | Adobe | Lossless | Arbitrary | | Derived from TIFF
DPX | Processed image | SMPTE | Lossless | 8–64 log | |
TIFF | Processed image | ISO/Adobe | Both | Arbitrary | 4:4:4, 4:2:0 | Supports HDR, alpha
OpenEXR | Processed image | Disney-Pixar | Both | 16 | | Supports HDR
JPEG 2000 (ISO/IEC 15444) | Processed image | ISO/JPEG | Both | 8, 10 | Various | Supports sequences with Motion JPEG 2000
JPEG (ISO/IEC 10918) | Processed image | ISO/JPEG | Lossy | 8 | 4:2:0 |
PNG (ISO/IEC 15948) | Processed image | W3C | Lossless | 8bpp, 8bpc | | Supports alpha
WebP | Processed image | Google | Both | 8 | 4:2:0 | Derived from WebM/VP8+

Container formats

Container | Authority | Seekable? | Multiple tracks? | Multiple programs? | MIME type | Notes
Transport Stream (MPEG-2 Part 1) | ISO/MPEG | No | Yes | Yes | video/MP2T | Used by DVB, ATSC, ARIB, Apple HLS; modified for use by Blu-Ray and AVCHD
Program Stream (MPEG-2 Part 1) | ISO/MPEG | Yes | Yes | No | video/MP2P | Used by DVD-Video (VOB), HD-DVD (EVO)
QuickTime | Apple | Yes | Yes | No | video/quicktime | Now harmonised with and extends Base Media
Base Media (MPEG-4 Part 12) | ISO/MPEG | Yes | Yes | No | Various | Derived from QuickTime .mov
MP4 (MPEG-4 Part 14) | ISO/MPEG | Yes | Yes | No | video/mp4, audio/mp4 | Derived from Base Media
FLV | Adobe | Yes | Yes | No | video/x-flv | Derived from Base Media
3GP & 3G2 | 3GPP | Yes | Yes | No | video/3gpp | Derived from Base Media
AVCHD/Blu-Ray MTS/TOD | Various | Yes | Yes | No | video/MP2T | Transport Stream packets prefixed with a 32-bit timecode
Elementary Stream (ES) | ISO/MPEG | No | No | No | Various | Raw codec data
Packetized Elementary Stream (PES) | ISO/MPEG | Yes | No | No | None (application/octet-stream) | Elementary Stream split into packets with an added header
MXF | SMPTE | Yes | Yes | No | application/mxf | Forms the basis of the Digital Production Partnership (DPP) UK broadcasting delivery specification
AIFF | Apple | Yes | No | No | audio/x-aiff, audio/aiff | Typically used as a lightweight single-essence container
AAF | AMWA | Yes | Yes | No | None (application/octet-stream) | Derived from Microsoft (OLE) Structured Storage as used by legacy Microsoft Office
Matroska | Matroska | Yes | Yes | No | audio/x-matroska, video/x-matroska | Not well-supported
JP2 (ISO 15444-12) | ISO/JPEG | No | No | | image/jp2, image/jpx, image/jpm, video/mj2 | Derived from Base Media; profiled for JPEG 2000 (and Motion JPEG 2000) essence
WebM | Google | Yes | Yes | No | audio/webm, video/webm | Derived from Matroska; only used to carry WebM audio & video essence
RIFF | Microsoft | Yes | Yes | No | Various (particularly audio/vnd.wave, audio/wav, audio/wave, audio/x-wav, video/x-msvideo) | WAV and AVI are both RIFF formats
ASF | Microsoft | Yes | Yes | No | audio/x-ms-wma, video/x-ms-wmv | Considered legacy; WMA and WMV are both ASF formats
Ogg | Xiph | Yes | Yes | No | audio/ogg, video/ogg | De facto container for Vorbis audio and Theora video

Metadata formats

Container | Authority | Extensibility | Standalone? | Embedded in | Notes
Exif | Unmaintained | Controlled | No | JPEG, TIFF, JPEG 2000, PNG | Largely superseded by XMP; contains IPTC IIM
Adobe XMP | Adobe | Arbitrary (URIs) | Yes | TIFF, JPEG 2000, PDF | XMP is a subset of RDF/XML; widely-used
ID3v2 | Various | Consensus | No | MP3, AIFF, MP4 | Considered legacy, but widely-used
MP4 | ISO/MPEG | FourCC registry | No | Base Media and derivatives |
MPEG-7 | ISO/MPEG | Controlled | Yes | Base Media | XML-based; describes relationships between components
MPEG-21 | ISO/MPEG | Controlled | Yes | Base Media | Includes rights expression
TV-Anytime | Unmaintained | Controlled | Yes | Base Media | Considered legacy but used in broadcast applications
Turtle (RDF) | W3C | Arbitrary (URIs) | Yes | | Not currently widely-used as a media metadata container; can be generated from RDF/XML
RDF/XML | W3C | Arbitrary (URIs) | Yes | | Generally considered legacy, superseded by Turtle; basis of Adobe XMP

Packaging formats

Package | Authority | Metadata formats | Container formats | Multiple programs? | Notes
AVCHD | Sony/Panasonic | | MTS/TOD | Yes | Derived from Blu-Ray
DVD-Video | DVD Forum | | Program Stream (MPEG-2 Part 1) | Yes |
CinemaDNG | Adobe | XMP | MXF, DNG | No | Intended to package losslessly-encoded media
Digital Production Partnership (DPP) | DPP | DPP XML | MXF | No | Intended for delivery of complete programmes to broadcasters

Streaming formats

Format | Authority | Manifest format | Container formats | Notes
IIS Smooth Streaming | Microsoft | XML | MTS/TOD | HTTP-based adaptive streaming for Silverlight clients
RTMP | Adobe | Protocol exchange | | Adaptive streaming for Adobe Flash; considered legacy but remains widely-used, often alongside HLS
Apple HLS | Apple/IETF | Extended playlist (m3u8) | Transport Stream (MPEG-2 Part 1) | Particularly well-supported on mobile devices
Adobe HDS | Adobe | XML | FLV | Considered legacy; Adobe is transitioning to HLS for streaming media

License index

The Research & Education Space crawler discards RDF data which is not explicitly licensed using one of the well-known licences listed below. Note that the URI listed here is the URI which must be used as the object in the licensing statement, as described in Incorporating rights information into RDF data.

Creative Commons Public Domain (CC0)
Library of Congress Public Domain
Creative Commons Attribution 4.0 International (CC BY 4.0)
Open Government Licence
Digital Public Space Licence, version 1.0
Creative Commons 1.0 Generic (CC BY 1.0)
Creative Commons 2.5 Generic (CC BY 2.5)
Creative Commons 3.0 Unported (CC BY 3.0)
Creative Commons 3.0 US (CC BY 3.0 US)
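As an illustration, the crawler's behaviour amounts to a whitelist lookup: a dataset passes only if some triple uses a recognised licensing predicate with a whitelisted licence URI as its object. The sketch below is not the Acropolis implementation; the predicate URIs are the published DCMI Terms and Creative Commons REL ones, the whitelist is deliberately abbreviated, and the function name is invented for this example.

```python
# Illustrative sketch of the crawler's licence check; not the actual
# Acropolis code. Consult the canonical list for the full set of
# acceptable licence URIs.

# Predicates recognised as licensing statements (dct:rights,
# dct:license, cc:license).
LICENSE_PREDICATES = {
    "http://purl.org/dc/terms/rights",
    "http://purl.org/dc/terms/license",
    "http://creativecommons.org/ns#license",
}

# Abbreviated whitelist of acceptable licence URIs.
WHITELISTED_LICENSES = {
    "http://creativecommons.org/publicdomain/zero/1.0/",
    "http://creativecommons.org/licenses/by/4.0/",
}

def is_acceptably_licensed(triples):
    """Return True if any (subject, predicate, object) triple asserts
    a whitelisted licence; otherwise the data would be discarded."""
    return any(
        pred in LICENSE_PREDICATES and obj in WHITELISTED_LICENSES
        for _subj, pred, obj in triples
    )
```

In practice the licensing statement must appear in the retrieved data itself, since the crawler evaluates each document as it is fetched.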

Vocabulary index

Vocabulary | Namespace URI | Prefix | Section
Access Control ontology conditionally-accessible resources
Bibliographic Ontology creative works
Basic geo vocabulary places
Creative Commons Rights Expression Language describing rights and licensing
CIDOC CRM physical things
DCMI Metadata Terms metadata, Metadata describing rights and licensing, Collections and data-sets
DCMI Types describing documents, Collections and data-sets
Event ontology events
FOAF metadata, Describing digital assets
FRBR Core creative works
GeoNames Ontology places
Media RSS digital media, Describing digital assets
Media types digital assets
ODRL 2.0 describing rights and licensing
OpenSearch RES API: the index and how it’s structured
OWL RES API: the index and how it’s structured, Referencing alternative identifiers: expressing equivalence
Programmes Ontology creative works
RDF schema RES API: the index and how it’s structured, Common metadata
RDF syntax RES API: the index and how it’s structured, Common metadata
SKOS concepts and taxonomies
VoID RES API: the index and how it’s structured, Collections and data-sets
W3C formats registry RES API: the index and how it’s structured, Metadata describing documents
XHTML Vocabulary RES API: the index and how it’s structured

Class index

The following RDF classes are applied to entries in the RES index by the aggregator, based upon the class they are evaluated as belonging to:—

Class | Description | Section
foaf:Agent | Agents (i.e., things operating on behalf of people or groups) | Describing people, projects and organisations
dcmitype:Collection | Collections | Collections and data-sets
skos:Concept | Concepts | Describing concepts and taxonomies
frbr:Work | Creative works | Describing creative works
void:Dataset | Datasets | Collections and data-sets
foaf:Document | Digital assets | Describing digital assets
event:Event | Events (time-spans) | Describing events
foaf:Organization | Organizations | Describing people, projects and organisations
foaf:Person | People | Describing people, projects and organisations
crm:E18_Physical_Thing | Physical things | Describing physical things
geo:SpatialThing | Places (locations) | Describing places
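The class mapping above can be pictured as a lookup from source rdf:type URIs to the classes applied to index entries. This is an illustrative sketch, not the aggregator's source: the namespace URIs are the standard published ones for each vocabulary, the map is abbreviated (CIDOC CRM is omitted), and the function name is invented.

```python
# Illustrative mapping from rdf:type URIs to the classes the RES
# aggregator applies to index entries (abbreviated; not Acropolis code).
CLASS_MAP = {
    "http://xmlns.com/foaf/0.1/Agent": "foaf:Agent",
    "http://xmlns.com/foaf/0.1/Person": "foaf:Person",
    "http://xmlns.com/foaf/0.1/Organization": "foaf:Organization",
    "http://xmlns.com/foaf/0.1/Document": "foaf:Document",
    "http://purl.org/dc/dcmitype/Collection": "dcmitype:Collection",
    "http://rdfs.org/ns/void#Dataset": "void:Dataset",
    "http://www.w3.org/2004/02/skos/core#Concept": "skos:Concept",
    "http://purl.org/vocab/frbr/core#Work": "frbr:Work",
    "http://purl.org/NET/c4dm/event.owl#Event": "event:Event",
    "http://www.w3.org/2003/01/geo/wgs84_pos#SpatialThing": "geo:SpatialThing",
}

def index_classes(rdf_types):
    """Map a collection of rdf:type URIs to the index classes they
    imply; unrecognised types are simply ignored."""
    return {CLASS_MAP[t] for t in rdf_types if t in CLASS_MAP}
```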

Predicate index

This section lists the predicates which are specifically recognised by the RES aggregation engine, whether they are cached (against the original subject URI from the data in which they appear), and whether they are relayed in the composite entity generated by the aggregator.

Predicate | Entity kind | Cached? | Relayed?
rdf:type | Any | Yes | Yes, but also mapped to pre-defined classes
foaf:givenName and foaf:familyName | People | Yes | Yes, as rdfs:label
foaf:name | Agents | Yes | Yes, as rdfs:label
gn:name | Places | Yes | Yes, as rdfs:label
gn:alternateName | Places | Yes | Yes, as rdfs:label
dct:title, dc:title, foaf:name, skos:prefLabel | Any | Yes | Yes, as rdfs:label
crm:P138i_has_representation | Any | Yes | Yes, as foaf:depiction
dct:subject | Creative works, collections, digital assets | Yes | Yes
dct:rights, dct:license, cc:license | Any | Yes | No
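The relay behaviour described in the table can be sketched as follows. Prefixed names stand in for full predicate URIs, the treatment of predicates outside the table is simplified, and the function name is invented; this is an illustration of the rules, not the aggregator's source.

```python
# Illustrative sketch of the relay rules; not the Acropolis code.
# Naming predicates are relayed as rdfs:label; licensing predicates
# are cached for licence evaluation but never relayed.
LABEL_PREDICATES = {
    "foaf:givenName", "foaf:familyName", "foaf:name",
    "gn:name", "gn:alternateName",
    "dct:title", "dc:title", "skos:prefLabel",
}
NOT_RELAYED = {"dct:rights", "dct:license", "cc:license"}

def relay(cached_triples, composite_subject):
    """Produce the triples relayed into the composite entity's
    description (other predicates pass through unchanged here,
    which is a simplification)."""
    out = []
    for _subj, pred, obj in cached_triples:
        if pred in NOT_RELAYED:
            continue  # cached, but never relayed
        if pred in LABEL_PREDICATES:
            pred = "rdfs:label"  # naming predicates become rdfs:label
        out.append((composite_subject, pred, obj))
    return out
```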