Author Archives: Paul

Content Modelling and Storytelling

BBC Television Centre in panoramic view – by strollerdos, from Flickr (Creative Commons license)

During the past year or so, the team at bbc.co.uk/programmes have been putting together a resource which allows people to access information about the BBC’s output in a structured way. This has been done using the principles of the semantic web, and of Linking Open Data. I’ll not go into great detail here, other than to explain the basics. When people think of the content of the web today, they think of ‘websites’ and ‘webpages’. These sites and pages have unique addresses, so that when you type an address into your browser, you know exactly where you will end up. On these websites and webpages, people can write about all sorts of things, any number of topics. And these topics can be discussed again and again across various different websites and pages.

What’s missing is the links between these sites and pages. Usually, it takes a search via Google (other search engines are available, folks) to get a good example of this. You type in the topic you’re looking for, and the search engine returns all the pages it can find where the words that form that topic are mentioned. But there’s no single webpage which represents the topic itself. If there were, and if it had its own unique address, then everyone could, on their own websites, link to that ‘topic’ as a way of saying “This is exactly the thing I’m talking about.” If several people did this, their websites would be automatically linked together by the fact that they share the same topic – rather than by the fact that someone has created a physical link between one page and another. The former makes more sense, and is much more useful in the long run. Indeed, if these ‘topics’ or ‘concepts’ had their own unique, permanent addresses on the Internet, then the web would become not only a store of ‘pages’, but of ‘concepts’ which happen to have ‘pages’ related to them – not everything on the web would have to be a ‘page’. Only things that really were pages (say, of a book…) would then be referred to as a page. We could reclaim the word page, people!

Ahem, I think I’m getting slightly off topic. Anyway, the point is that the good folks at /programmes are taking these ideas on board, and are creating addresses (i.e. URLs) for each programme that the BBC produces. For instance, an address like http://www.bbc.co.uk/programmes/b00gfzhq uniquely identifies that programme. People all over the web can use that address when discussing that particular programme, so everyone will know exactly what they are referring to. In effect, therefore, although typing that address into your browser presents you with a ‘page’ about that programme, the address itself represents the programme, rather than the page about the programme. (Because I could post one blog entry saying ‘I loved watching this particular programme’, and another saying ‘I hate this particular page about the programme’.)

All this is well and good, but it relies on a solid foundation. These foundations have been constructed over a number of years by people who have been thinking about the structure of what the BBC produces, and the way in which it produces and distributes its content. These structures, or models, include thinking about how a programme is organised into brands (e.g. Blackadder), series (e.g. Series 2), episodes (e.g. ‘Bells’), even versions of the episode (e.g. pre- and post-watershed versions), and then broadcasts of a particular version on a particular channel at a particular time.
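To make the brand → series → episode → version → broadcast hierarchy concrete, here is a rough sketch of that model in plain Python. This is my own illustration, not the actual /programmes data model – the class names, fields and the broadcast details are all assumptions for the sake of the example:

```python
from dataclasses import dataclass, field

@dataclass
class Broadcast:
    channel: str
    start_time: str  # ISO 8601 timestamp (illustrative value below)

@dataclass
class Version:
    label: str  # e.g. 'pre-watershed' or 'post-watershed'
    broadcasts: list = field(default_factory=list)

@dataclass
class Episode:
    title: str
    versions: list = field(default_factory=list)

@dataclass
class Series:
    title: str
    episodes: list = field(default_factory=list)

@dataclass
class Brand:
    title: str
    series: list = field(default_factory=list)

# Blackadder > Series 2 > 'Bells' > post-watershed version > one broadcast
bells = Episode('Bells', [Version('post-watershed',
                                  [Broadcast('BBC One', '1986-01-09T21:30:00Z')])])
blackadder = Brand('Blackadder', [Series('Series 2', [bells])])
```

The key point the sketch captures is that a broadcast hangs off a version, not off the episode directly – the same episode can exist in several versions, each broadcast any number of times.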

The modelling around this is by no means complete, and is being refined and improved all the time. However, I would suggest that, for the most part, the modelling around these sorts of things is approaching maturity, in that those structures are fairly well accepted (that’s not to say they won’t change, but I think most people working on these things now agree that there are areas of the model which are pretty stable). As I have mentioned, I think these structures represent the production and distribution of the BBC’s content. But what about the content itself?

I think what we haven’t yet looked at modelling is the structure of the content itself – this applies particularly to fiction, but also, perhaps, to sport. Taking the semantic web ideas into account: if we have unique URLs for each episode, each series and each brand, why not have a URL for each character within a programme, for each event that connects characters, and for each place within the programme? Brands, series and episodes in effect become building blocks on which we and others can create all sorts of semantically interlinked websites, using SPARQL (the semantic web equivalent of SQL – querying concepts and the links between them, using the web as a huge datastore). If we were to give characters, events and places their own unique addresses too, we could mash them up to create sites (and new content) such as timelines from a particular character’s point of view, ways to follow story arcs, and so on.
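The “querying concepts and the links between them” idea can be sketched without any real triple store at all. Below is a toy in-memory store in plain Python, where a pattern with wildcards plays the role a variable plays in a SPARQL query. The URIs and predicate names (ex:appearsIn, ex:setIn, ex:involves) are entirely invented for illustration:

```python
# A tiny in-memory triple store. None in a pattern acts as a wildcard,
# much as a variable (?x) does in a SPARQL query.
triples = [
    ('ex:blackadder', 'ex:appearsIn', 'ex:bells'),    # character -> episode
    ('ex:baldrick',   'ex:appearsIn', 'ex:bells'),
    ('ex:bells',      'ex:setIn',     'ex:london'),   # episode -> place
    ('ex:wedding',    'ex:involves',  'ex:blackadder'),  # event -> character
]

def query(s=None, p=None, o=None):
    """Return every triple matching the (subject, predicate, object) pattern."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Roughly: SELECT ?who WHERE { ?who ex:appearsIn ex:bells }
characters = [s for (s, _, _) in query(p='ex:appearsIn', o='ex:bells')]
```

Once characters, events and places have addresses like these, a “timeline from one character’s point of view” is just such a query filtered by that character’s URI.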

I like to imagine that the ultimate would be something where every character, event and place in the fictional universe of a programme has an address – and then, just like taking toy models of those things, we and the audience could make our own stories from them. You want to add your own characters to the mix? Sure, give them a unique address (for instance, in your own webspace), and start linking them to other characters, events and so on. One final point to bear in mind as a caution, however. Although all this structure is essentially a good thing, we should be wary of creating too much structure and limiting what we can do with it. If we fix things down too rigidly, declaring one ‘official’ version of a particular event, background or character, we will limit creativity and provoke only arguments about who’s right or wrong. We want to give people the building blocks and toys to create new stories, but we don’t want to restrict the stories they tell.
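Minting a fan-made character in your own webspace and linking it into the official universe could look something like the triples below, built in plain Python. foaf:name and foaf:knows are genuine FOAF properties, but both URIs here are hypothetical – the BBC character address is my own invention, not a real /programmes URL:

```python
# Hypothetical URI for an 'official' character, and one a fan has
# minted in their own webspace.
OFFICIAL = 'http://www.bbc.co.uk/characters/blackadder'    # invented, not real
FAN_MADE = 'http://example.org/~me/characters/mycharacter'

FOAF = 'http://xmlns.com/foaf/0.1/'  # real FOAF namespace

story_graph = [
    (OFFICIAL, FOAF + 'name', 'Edmund Blackadder'),
    (FAN_MADE, FOAF + 'name', 'My New Character'),
    # The crucial triple: the fan character links itself into the
    # existing fictional universe simply by referencing the official URI.
    (FAN_MADE, FOAF + 'knows', OFFICIAL),
]
```

Nothing about the official data needs to change for this to work – that is the appeal: anyone can extend the universe from their own webspace just by pointing at the shared addresses.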

So, that’s the theory. Now for the practice. I’ve made it my mission to explore these ideas, and experiment with making them a reality. Thanks to a great presentation the other day by Yves Raimond, Nicholas Humphrey and Patrick Sinclair, I’m getting to grips with ontologies such as FOAF. I’m going to be using FOAF and the Event Ontology in particular to try and express stories in a semantic way, and to see whether we need a new ontology for storytelling, and what we would need from it. As I say, it’s going to be very much trial and error. I could sit here with an extremely detailed plan, working out all my structures and linking everything up straight away, or even hold off until I’ve got a new ontology in place and working just right. But I think it would be more useful to get in there, try things, discuss them and come back with new ideas. In short, I may not do things correctly the first, second or even third time, but it’s a case of experimenting, seeing what works, how else it could work, and what’s the best way of doing things.

With that out of the way, I invite you to join me. Please give me feedback on what I discuss here, and what I construct. It’ll be extremely welcome, and will hopefully help us find a better way of doing things even faster. And it’s already begun – I’ll write a separate blog post soon on my first forays into the world of Fictional FOAF modelling….

Coldcut @ Electric Proms ’08

As I write this entry, the first fireworks of the year are going off outside. “Bang after bang after bang after bang”, as an accomplished broadcaster once said. On Saturday night I went to the Roundhouse in Camden for my first Electric Proms concert – Coldcut via the Radiophonic Workshop. My brother and I got there early to pick up the tickets, expecting to only be allowed in for the DJ set, rather than the discussion beforehand. Luckily for us, the discussion had been delayed by half an hour, and we managed to get in for that as well.

I won’t bother describing the event in great detail, as you can find all the info on the Electric Proms website. Having said that, one great thing I’ve just uncovered via that link is a minute-by-minute record of the gig from Twitter. As an aside, that’s one of the cool things about Twitter, from my experience – conversations between work colleagues that might otherwise go unrecorded, including examples of collaboration and idea-building, are preserved. Equally, live experiences which, unless ‘taped’, will eventually be forgotten, can be preserved in some fashion here – including, crucially, the emotions and feelings of the people experiencing the event. (I wonder – what about supplementing the football minute-by-minute feeds on matchdays with Twitter feeds as well as 606 comments?)

The DJ set itself was quite good – it’s sometimes hard to ‘get into’ a gig when it’s material you don’t recognise, hence the best part of the performance was when they re-mixed the Doctor Who theme (it got the best reception from the crowd, and it’s a shame that it wasn’t longer, in this fan-boy’s opinion). It would have been interesting to have included more voice sampling in a similar fashion to ‘Doctorin’ The House’ or ‘More Beats and Pieces’, but I can understand why they wanted to concentrate on Radiophonic Workshop material.

Other than that, the night consisted of a house party in Willesden Green, where the floors consisted of a large bed of autumnal leaves, everyone wore increasingly bizarre hats, and Bjork was on the stereo. Not much more to add, I suppose. Oh, and today I started work as an ‘Information Architect’ at BBC Audio & Music – experiences which, I hope, will inspire future blog posts.