VIVO 2016 Conference
 

Join us for the Wednesday Workshops

Morning Workshops (8:30 AM - 12:00 PM)

Introduction to VIVO: Planning, Policy, and Implementation

Presenters: Paul Albert, Brian Lowe, Andi Ogier, Michaeleen Trimarchi, Julia Trimmer and Alex Viggio

New to VIVO, either as a new team member or as part of a new implementation? What is VIVO all about? How did VIVO evolve, and what benefits does it offer to researchers, to institutions, and to the global community?

This workshop provides an institutional perspective from people who have worked with VIVO. You’ll meet six VIVO community members with years of experience at different institutions. The presenters will talk about how VIVO is used in their organizations, where the data come from, how VIVO is managed, and how to feed downstream systems. You’ll learn how to find the right resources for a new VIVO implementation — data sources, team members, governance models, and support structures. This workshop brings best practices and “lessons learned” from mature VIVO projects to new implementations. We’ll help you craft your messages for different stakeholders, so you’ll leave this workshop knowing how to talk about VIVO to everyone from your provost to faculty members to web developers.

Getting Data from Your VIVO: An Introduction to SPARQL

Presenter: Michael Conlon

One of VIVO’s greatest strengths is its ability to provide all its data for reuse. This workshop will introduce attendees to SPARQL (SPARQL Protocol and RDF Query Language), the W3C standard for querying RDF data. VIVO comes with SPARQL ready for use. The workshop will also cover the basic concepts needed to get data from VIVO, such as the Resource Description Framework (RDF), Uniform Resource Identifiers (URIs), and the VIVO ontologies. Each will be explained in simple terms and reinforced by example.

Attendees will work in groups through simple to moderately complex SPARQL queries drawn from real-world uses of VIVO data. Attendees will learn how to export data returned by SPARQL queries to spreadsheets for subsequent data analysis, tabulation, or visualization. This is an introductory workshop; no prior knowledge of SPARQL, RDF, or the VIVO ontologies is needed to participate. Following the workshop, attendees will be able to read VIVO ontology diagrams and use these diagrams to write and run SPARQL queries on their own VIVOs.
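
To give a flavor of the queries attendees will practice, here is a minimal sketch that lists ten people and their names from a VIVO and writes the results to a CSV file ready for a spreadsheet. It assumes the SPARQLWrapper Python package and a VIVO whose SPARQL endpoint is reachable at the URL shown; the endpoint path and any authentication vary by installation.

    # A minimal sketch: query a VIVO SPARQL endpoint and export to CSV.
    # The endpoint URL below is an assumption; adjust it for your site.
    import csv
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("http://localhost:8080/vivo/api/sparqlQuery")
    endpoint.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?person ?name
        WHERE {
            ?person a foaf:Person ;
                    rdfs:label ?name .
        }
        LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    results = endpoint.query().convert()

    # One row per query solution, ready to open in a spreadsheet.
    with open("people.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["person", "name"])
        for row in results["results"]["bindings"]:
            writer.writerow([row["person"]["value"], row["name"]["value"]])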

Getting Started with VIVO 1.9: Introduction to the Maven Build Process

Presenter: Graham Triggs

Following the 1.8.1 release of VIVO, the build environment was migrated from the existing Ant scripts to Maven project descriptors. This brings a number of benefits: it’s a more immediately familiar environment for many Java developers, modern IDEs can read the project description and set up the environment automatically, and we’re able to better declare and manage the dependencies. However, in order to make the project look and feel familiar to a new user approaching it from a Maven perspective, with an expectation of a standard project layout, the structure of the Vitro/VIVO projects needs to change slightly.

This half-day workshop will help attendees understand how the project structure has changed in VIVO 1.9 and show them how to adapt their existing codebases when upgrading. It will also provide an introduction to the Maven project layout for new users and show how they can make effective use of Maven when creating a new VIVO implementation.
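
For reference, the standard layout that Maven expects (and that a new user will look for) is, in simplified form:

    pom.xml              <- the Maven project descriptor
    src/
        main/
            java/        <- Java sources
            resources/   <- configuration files and other resources
            webapp/      <- web application content (for a war project)
        test/
            java/        <- unit test sources

The exact module structure of VIVO 1.9 itself may differ from this sketch; the workshop will cover the specifics.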

Afternoon Workshops (1:30 PM - 5:00 PM)

Data Integration with Karma

Presenters: Pedro Szekely and Violeta Ilik 

The VIVO platform has been designed to lower barriers to data interchange and reuse through standard data formats, ontologies, and identifiers consistent with Semantic Web best practices. The workshop will introduce the basic functionality of the Karma data integration tool and provide attendees with hands-on training. Attendees will learn how to provide ontologies to Karma, how to load data, how to define URIs, how to transform data using Python scripts, how to map the data to the ontology, how to save, reuse, and share mapping files, and how to produce RDF and JSON. No prior knowledge of semantic technologies will be assumed.
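
As a taste of the Python transformations covered: inside Karma, a column transform is a small Python expression evaluated once per row, using a helper to read cells from the current row. The sketch below imitates that pattern outside Karma (the column names and the stand-in helper are illustrative; consult the Karma documentation for the exact transform API):

    # Sketch of a Karma-style Python transform: build "Last, First"
    # display names. The stand-in getValue() lets the snippet run
    # outside Karma; inside Karma a helper of this kind reads a cell
    # from the row currently being transformed.
    row = {"first": "ada", "last": "lovelace"}   # sample input row

    def getValue(column):                        # illustrative stand-in
        return row[column]

    print(getValue("last").capitalize() + ", " + getValue("first").capitalize())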

Implementing a researcher profile system involves aggregating data from a variety of sources. Data needs to be mapped, cleaned, and maintained. Participants will use the presented tools to:

  • Model data in a variety of formats with the help of established ontologies (FOAF, FaBiO, CiTO, BIBO, SKOS, VIVO-ISF)
  • Understand the use of the Web Ontology Language (OWL)
  • Create RDF data for use in ontology-driven applications (see the sketch below)
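
To illustrate the last point, here is a minimal sketch of creating FOAF-typed RDF in Python with the rdflib library (rdflib, the URI, and the name are illustrative; in the workshop itself, Karma produces the RDF):

    # A minimal sketch: describe a person with FOAF and print Turtle.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    person = URIRef("http://example.org/individual/n1234")  # illustrative URI
    g.add((person, RDF.type, FOAF.Person))
    g.add((person, FOAF.name, Literal("Jane Scholar")))

    print(g.serialize(format="turtle"))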

We will begin with lectures and continue with hands-on demonstration and experimentation. The lectures are designed to help participants gain experience with and knowledge of researcher profiling systems, the importance of ontologies, R2RML (a language for expressing customized mappings from relational databases to RDF datasets), and the advantages the Karma data integration tool offers in transforming data into Semantic Web-compliant data.

The workshop will help participants plan an organization’s efforts to move existing data into ontology-driven applications, like VIVO, to uniquely represent the scholarly outputs of researchers in their institutions and beyond.

Linked Data Fragments: Hands-on Publishing and Querying

Presenter: Ruben Verborgh

Linked Data on the Web—how can we use it, and how can we publish it?

This workshop explores different interfaces to Linked Data using the Linked Data Fragments conceptual framework. There are two main aims of this session: learning to consume existing Linked Data from the Web, and publishing your own dataset using a low-cost interface. Additionally, we will build small applications in the browser that make use of Linked Data.
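
As a small preview of the consuming side: a Triple Pattern Fragments interface, one of the low-cost interfaces described by the Linked Data Fragments framework, is plain HTTP, so a fragment can be fetched in a few lines of Python. This sketch uses DBpedia's public fragments server; the URL, parameters, and media type reflect public TPF deployments and may vary:

    # Fetch one fragment: all triples with a given subject.
    # Requires the "requests" package; endpoint details are assumptions.
    import requests

    response = requests.get(
        "http://fragments.dbpedia.org/2015/en",
        params={"subject": "http://dbpedia.org/resource/Tim_Berners-Lee"},
        headers={"Accept": "text/turtle"},
    )
    print(response.text)  # matching triples plus hypermedia controls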

This session is aimed at participants with a technical background, as we will get into the details of Linked Data publication and querying. However, people with a broader interest are also welcome, as participants can work together in groups. If you want to learn what roles Linked Data can play in your organization on a very practical level, this workshop is definitely for you.

If there is interest, this workshop can be extended with a hackathon later, in which people can build prototype applications on top of live Linked Data on the Web.

How to Make Your Research Networking System (RNS) Invaluable to Your Institution

Presenters: Brian Turner, Anirvan Chatterjee and Eric Meeks

This workshop is designed to help institutions build, leverage, and deploy the information within their RNS across the institution. The goal is to increase awareness of, engagement with, and dependence on your RNS, solidifying its role in supporting researchers. Note that the takeaways from this workshop can be applied to your RNS regardless of the underlying product, and will work for a VIVO, Profiles, “home grown,” or commercial RNS installation.

This workshop will provide a mix of lecture, discussion, exercises, and templates to enable participants to replicate the successful engagement at UCSF — and avoid our biggest mistakes. Use of this approach has garnered 1.3 million visits a year, 2,800 customized profiles, and 38 outbound data reuse integrations for UCSF Profiles.

This workshop will discuss ways to make your RNS indispensable in each phase of the implementation:

Setting the Stage
  • Auto-added data: add as much data as possible to your RNS - publications, grants, photos, news stories, etc.
  • Google Analytics - the crucial substrate
  • Pre-packaging of publications, people and connections
  • APIs to make the data available
  • Targeting and messaging decision makers appropriately

Dress Rehearsal

  • User support - answer the emails!
  • Senior leadership support
  • Deploy APIs to power other websites
  • Search engine optimization

Opening Night & The Season

  • Engagement email campaigns
  • Bootcamps and department meetings
  • Finding and catering to the “power users”
  • Iterative process of growing user base/traffic and providing more data/features

After Party

  • Wrapping it all up to show the value of your RNS in an executive-level report for senior administration