In 2014 I persuaded my library to build another website. No, not a redesign, and not a new site built from the ground up to replace the one we had. This was another website – a second one.

Ours is a unique joint-use academic and public facility. Divide this library’s users into its broadest audiences and there are still plenty to account for: faculty, undergraduate and graduate students (local and distant), alumni, and the public – whatever that means.

Lumping the public into one big patron base isn’t particularly useful, but the allocation of the homepage’s real estate constrained our ability to fine-tune it. Our incentive to accommodate the academic community crowded out our ability to accommodate the audience who cared about events and entertainment – and this is precisely where our usability studies drew the line. Public cardholders appreciated the site but asked for more prominent access to new popular materials and programs; students and faculty were pretty clear about what they didn’t want.

A gif flipping between a wireframe of an ideal website and a heatmap of where people actually engaged

So I talked colleagues into spinning off a new website – different look and feel, tone, even domain – just for public library services. They weren’t shy about voicing concerns: the increased workload involved in doubling up and maintaining two sets of content, and whether this decision would, say, hide research services from the public or programming from the faculty. Content locked away in a silo is, after all, locked away in a silo. There’s a risk that a graduate student using the academic library website might not see that a favorite author is visiting when that event is only posted for the public.

Right. Big problem, but not one exactly unique to this project. Libraries have been suffering these pain points for years; assuaging this grief is exactly the selling point of discovery layers. The “library website” we refer to in the singular is more like an organism of microsites and applications: the catalog, a static homepage hosted by the institution or county, maybe a one-off Drupal site on a server the library controls, subject guides, an event management system, a room reservation system, an iPhone app. Silos are a library’s present and future.

The web’s increasing device complexity and capability reinforces silos, and will continue to. As libraries approach their mobile moment, library websites that try to do too much will fail, whereas sites and apps that focus on doing just one thing well will succeed. It’s this sentiment that recommends developers consider breaking functionality out among multiple apps: there is a point at which an app can be too feature-rich.

The Kano model can illustrate that some features have a negative impact on customer satisfaction.

“Everything is designed. Few things are designed well.” – Brian Reed

Libraries are actually in a good position to benefit from this strategy. So much of the library web presence is already single-purposed that it wouldn’t take much to retrofit. Rather than roll the catalog into the same template as the homepage, it can be embraced as a standalone web app with its own unique purpose-driven user interface. This isn’t about going off-brand, but without the pressure of inheriting a mega-menu from the homepage, the library can be more judicious with the catalog’s design. This makes sense for library services when patrons are task-driven. Time is better spent optimizing for engagement rather than making the sites identical.

Not to mention, silos aren’t inherently bad for discovery. Organizing web content the way news sites have sports sections is sound. Robots and crawlers have an easier time indexing content when there is a delineated path in which similar content is clustered and interlinked. The machines are cool with it. What makes discovery tricky for humans is that content on one site isn’t usually available on another. If patrons visit the library with a task in mind – “I want to find something to read,” “I need to renew my items,” “I want to see events,” or “I need to write a paper” – then there isn’t much incentive to browse outside of that content silo.

Libraries can’t depend on patrons just happening onto the event calendar after picking through the databases, nor can they depend on cramming everything on, or funneling everyone through, the front page. Getting found is going to get harder. If an institution has the ability and incentive to build an app, stakeholders want that option on the table without dramatically impacting workflow. Libraries will need to be able to grow, adapt, and iterate without having to fuss over content.

A C.O.P.E.-ing mechanism

I knew a standalone, public-themed, public-toned, public-specific library website would better resonate with, well, the public. If we were better able to fine-tune the content for the audience, patrons would be more likely to engage with library services for a longer time. This allows more opportunity to introduce new services, promote databases, maybe increase circulation.

At the same time, relieving the pressure on the one homepage lets the library better serve academic patrons too. The opportunity to increase engagement all around won this gamble the stakeholder support it needed, but only so long as it didn’t dramatically strain workflow or block any potential content from any user. We needed to change how we approached content so that one item could be shared across all platforms without anyone having to micromanage which piece of content appeared where.

In 2009, Daniel Jacobson, NPR’s Director of Application Development, wrote a series of posts on Programmable Web about the NPR API, beginning with “C.O.P.E.: Create Once, Publish Everywhere.” To meet the demand for content from native apps on various platforms and from microsites, including NPR affiliate sites, the team wrote an API that made it easier to fetch and repurpose media. This remains an important principle for addressing the challenges of a future-friendly web.

NPR’s content architecture diagram

For most libraries it’s not going to be realistic to control all the content from one system, yet consolidating what’s possible will make it easier to manage over time. Some static pages lived on the institutional web server, where we had limited control, so we began migrating that old content into a WordPress multisite with which staff were already familiar.

There were specific types of content we intended to broadcast to the four corners: notifications and schedules, databases, events, reader’s advisory in the form of lists and staff picks, guides, and instructional videos. If the library’s success is determined by usage, turnout, and circulation, then on the web that success very much depends on the ability to spotlight this content at the patron’s point of engagement.

A content system as-is doesn’t cut it. Popular content management systems like WordPress and Drupal are wonderful, but to meet libraries’ unique needs for portable content they need a little rigging. If an institution hopes to staff-source content and expects everyone to use the system, then tailoring the CMS to the needs and culture of the library is an important step.

Subject specialists were creating guides and videos. Librarians involved with programming (both academic and public) were creating events. Others maintained departmental info, policies, schedules.

To get consistent, good content from the folks best suited to create it, it is unfair and counterproductive to present them with a system that has too steep a learning curve. I admit I was naive: I was surprised to see how strange and unfamiliar WordPress could be for those who don’t spend all day in it. De-jargoning the content system is no less important than de-jargoning the content.

Plus, these systems require tweaking to make content sufficiently modular. WordPress’s default editor – title, tags and categories, a featured image, and a blank-slate text box – doesn’t fly for a content type like an event, which requires start and end times, dates, and all-day or multi-day options.
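
To make that concrete – a minimal sketch, not our production configuration, and the type name here is hypothetical – WordPress lets you register a content type that skips the blank-slate editor in favor of purpose-built fields:

// A minimal sketch: register an "event" content type that skips the
// default blank-slate editor so events can be built from custom fields.
add_action( 'init', function () {
  register_post_type( 'event', array(
    'labels'   => array( 'name' => 'Events' ),
    'public'   => true,
    // No 'editor' in 'supports': start/end times, dates, and
    // all-day or multi-day options come from custom fields instead.
    'supports' => array( 'title', 'thumbnail' ),
  ) );
} );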

Moreover, the blank slate is intimidating.

Rigid content requirements and a blank-slate WYSIWYG don’t scale. When the content demanded is detail-oriented enough to warrant instructions, the stock editor can be replaced with smaller custom fields, which – like any form element – can be required before the post can be published.

Here’s an example: a self-hosted video actually requires multiple video formats to be cross-browser compatible, and captions to be accessible. Publishing without captions violates accessibility guidelines, but without a way to ensure that the captions exist, it is inevitable that at some point an inaccessible video will go live. Breaking the content editor into smaller, manageable chunks allows for fine control and checks and balances, and offers the added opportunity to insert instructions at each step to streamline the process.

Custom content fields in LibraryLearn
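
For instance – a hedged sketch with an assumed field name, not our actual setup – a save filter can hold a video post in draft until captions exist:

// Hypothetical sketch: keep a "video" post out of the published state
// until its captions field ('captions_file' is an assumed name) is filled.
add_filter( 'wp_insert_post_data', function ( $data ) {
  if ( 'video' === $data['post_type'] && 'publish' === $data['post_status']
    && empty( $_POST['captions_file'] ) ) {
    $data['post_status'] = 'draft'; // no captions, no publish
  }
  return $data;
} );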

A cross-system, cross-department controlled vocabulary is key. When we first started to think about sharing content between the public and academic library websites, we knew that on some level all content would need to be filterable by the term “public” or “academic.” We weren’t going to publish anything twice, so the public library website would have to know to ask for “public” content.

This was an addictive train of thought. We could go hog wild if new pages knew what kind of content to curate. What would it take, then, to create a page called “gardening” and make it a hub for all library content about the topic? It needs to be dynamic so it can stay current without micromanagement. It needs to populate itself with gardening book lists, staff picks, upcoming “gardening” events, and agricultural resources and databases – assuming the content exists. Isn’t this just a subject search for “gardening”?

If a library can assign one or two taxonomies applicable to all sorts of disparate content, then the query a site makes against the API can match terms regardless of content type. The taxonomy has to be controlled and enforced so that it is consistent, and where possible built right into the post editor. In WordPress, custom taxonomies can be tied to multiple content types without fuss.

register_taxonomy( 'your-taxonomy',

  // Add any content type here to which 'your-taxonomy' applies
  array( 'databases', 'events', 'reviews', 'items', 'video' ),

  // other taxonomy options omitted for brevity
  array( /* options */ )

);
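
With the shared taxonomy in place, the “gardening” hub above is effectively a single query across content types. A sketch, assuming the taxonomy was registered as 'subjects' and a 'gardening' term exists:

// One query, many content types, matched on the shared taxonomy term.
$gardening = new WP_Query( array(
  'post_type' => array( 'databases', 'events', 'reviews', 'items', 'video' ),
  'tax_query' => array( array(
    'taxonomy' => 'subjects',
    'field'    => 'slug',
    'terms'    => 'gardening',
  ) ),
) );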

I created two taxonomies: “Library Audience,” which lets us filter content by the type of patron – academic, public, teen, children and family, distance student, etc. – and “Subject,” which lets us filter by subject. The no-red-tape way to create a global “Subject” taxonomy was to just use the subjects that the library’s electronic resources already use: a standardized vocabulary overseen by a committee. In our specific case, database subjects actually boil down to a three-letter designation, so while users see “Business,” the slug passed around behind the scenes is “zbu.”
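
Seeding such a term is a one-liner – a sketch, again assuming the taxonomy was registered as 'subjects':

// The patron-facing name maps to the electronic resources' three-letter slug.
wp_insert_term( 'Business', 'subjects', array( 'slug' => 'zbu' ) );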

Here is what a query against our eventual API for “business” looked like:

https://example-library.org/api/get_content/?taxonomy=subjects&slug=zbu

Content is then liberated by an API. Content management systems like WordPress and Drupal already have an API of sorts: the RSS feed. Technically, any site can ingest the XML and republish whatever content is made available, but it won’t include anything added custom to the CMS. This isn’t an uncommon need, so both WordPress and Drupal have REST APIs – which are a little beyond the scope of this writeup.

These enable the programmatic fetch and push of content from one platform into another.
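
To give a flavor of the “fetch” half – a hedged sketch using WordPress’s stock REST routes rather than our custom endpoint – one site can pull another’s content like so:

// Sketch: pull recent posts from another WordPress site over its REST API
// (the /wp-json/wp/v2/ routes ship with WordPress 4.7+).
$response = wp_remote_get( 'https://example-library.org/wp-json/wp/v2/posts?per_page=5' );

if ( ! is_wp_error( $response ) ) {
  $posts = json_decode( wp_remote_retrieve_body( $response ) );
  foreach ( $posts as $post ) {
    echo esc_html( $post->title->rendered ) . "\n"; // republish as needed
  }
}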

In LibGuides – an increasingly capable library-specific content management system – our content creators can use HTML5 data attributes as hooks in the template that help determine the audience and type of content to grab. Each creates a little empty placeholder, like an ad spot, to be populated by a relevant event (if any), book lists, relevant resources, past or upcoming workshops, and more.

At the time this article was originally written, in summer 2014, it looked a little like this:

<span data-content="event" data-audience="public" data-subject="comics"></span>

in which librarians decided what type of content (e.g., an event) went where on their page. For each placeholder, based on its parameters, a script builds and submits the API query string using jQuery:

$.getJSON( '//www.example-library.org/api/get_event/?taxonomy=audience&slug=public&term=comics' )
  .done( function( response ) {
    // do something with the returned event
  });

We have since largely traded jQuery for Angular. When there is a placeholder, it’s a tad more agnostic:

<div ng-repeat="ad in ads">
  {{ ad.title }}
  <!-- other fields, etc. -->
</div>

but more often than not we just weasel it in there using attributes such as audience and type, which, unless otherwise specified, are determined from the page.

Randomly generated events on a library website

Not random, but library events that make sense on the pages where they appear.

Remember that just a few years ago many libraries rushed to create mobile sites but then struggled to maintain two sets of content, and that the follow-up, responsive web design, is a long process involving a lot of stakeholders – many libraries haven’t gotten that far because of the red tape. The landscape of the web will only get weirder. There are and will continue to be new markets, new corners of the internet where libraries will want to be.

Libraries that can C.O.P.E. will be able to grow, iterate, and evolve. Libraries that can’t, won’t.


Also published on Medium.

Michael Schofield is a service and user-experience designer specializing in libraries and the higher-ed web. He is a co-founding partner of the Library User Experience Co., a developer at Springshare, librarian, and part of the leadership team for the Practical Service Design community.