A presentation at Hydra Connect 2016, described thus: A follow-up to our presentation at Hydra Virtual Connect, showing the progress we've made on Opaquenamespace.org. We'll discuss how we are using Git and GitHub as the master copy for our RDF graphs, and Blazegraph with the triplestore-adapter gem as our operational datastore. An audio recording of the session is available for download below.
Keyword:
Resource Description Framework (RDF), Connect 2016, and Hydra
Subject:
Hydra Project
Creator:
Wick, Ryan, Gum, Josh, and Sato, Linda
Contributor:
University of Oregon Libraries and Oregon State University Libraries and Press
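The workflow described above keeps the authoritative RDF in Git and loads it into a SPARQL endpoint for day-to-day queries. A minimal sketch of that sync step is below; it is an illustration, not the triplestore-adapter gem's actual API, and the endpoint URL and graph URI are placeholder assumptions.

```python
# Sketch: rebuild one named graph in a SPARQL 1.1 Update endpoint (such as
# Blazegraph) from an N-Triples file kept under Git. The endpoint URL and
# graph URI used here are illustrative placeholders, not real services.
from urllib import parse, request

def sync_update(ntriples: str, graph_uri: str) -> str:
    """Build a SPARQL Update that drops and reloads a single named graph."""
    triples = "\n".join(
        line
        for line in ntriples.splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    )
    return (
        f"DROP SILENT GRAPH <{graph_uri}> ;\n"
        f"INSERT DATA {{ GRAPH <{graph_uri}> {{\n{triples}\n}} }}"
    )

def push(endpoint: str, update: str) -> int:
    """POST the update string to the endpoint's SPARQL Update service."""
    data = parse.urlencode({"update": update}).encode()
    req = request.Request(endpoint, data=data, method="POST")
    with request.urlopen(req) as resp:
        return resp.status

# Example: one triple as it might appear in a Git-tracked .nt file.
nt = ('<http://opaquenamespace.org/ns/example> '
     '<http://www.w3.org/2000/01/rdf-schema#label> "Example" .')
print(sync_update(nt, "http://example.org/graphs/vocab"))
```

In this arrangement a commit to the Git repository triggers a drop-and-reload of the corresponding named graph, so the triplestore can always be rebuilt from version control.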
A workshop delivered at Hydra Connect 2016, described thus: This will be a half-day, hands-on workshop covering data modeling primarily in RDF. We hope to bring a diverse group of Hydra community members together to learn, discuss, and build out examples that will inform Hydra community best practices for data modeling. This modeling work will be taught in the context of helping Hydra and Fedora development, metadata, and interoperability efforts. We will discuss how the model uses a number of standards, and demo the different ways to represent models. We will compare and contrast data modeling with metadata standards/profiles. We will walk through modeling efforts around PCDM and its place in our work and community, though this workshop will not focus on PCDM alone (this is not a PCDM or RDF workshop). We want this workshop to bring together, develop, and engage a larger corps of data modelers in the Hydrasphere.
Keyword:
Resource Description Framework (RDF), Connect 2016, Hydra, Portland Common Data Model (PCDM), Metadata, and Workshop
Subject:
Hydra Project
Creator:
Matienzo, Mark, Harlow, Christina, Johnson, Tom, and Hardesty, Juliet L
A workshop delivered at Hydra Connect 2016, described thus: This workshop is all about techniques to use linked data within your Hydra-based application. For example, autocomplete fields from a controlled vocabulary are nice... but what if you wanted to give more context to what users are selecting via things like alternative labels and broader/narrower concepts? How do you cache triples locally? How do you publish your own controlled vocabulary for others to use? And what is the best way to make your RDF data harvestable by others? This workshop is based on work done by the Applied Linked Data working group: https://wiki.duraspace.org/display/hydra/Applied+Linked+Data+Working+Group
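The autocomplete question above can be sketched as follows: given a locally cached set of SKOS triples, an autocomplete hit is enriched with alternative labels and broader concepts, not just the preferred label. The SKOS predicate URIs are real; the cache layout, vocabulary URIs, and function names are assumptions for illustration, not the working group's actual code.

```python
# Illustrative sketch only: enrich autocomplete suggestions with context
# (alternative labels, broader concepts) drawn from a local triple cache.
SKOS = "http://www.w3.org/2004/02/skos/core#"

# Local cache of (subject, predicate, object) triples, e.g. harvested from a
# published controlled vocabulary. Vocabulary URIs here are made up.
CACHE = [
    ("http://example.org/vocab/pdx", SKOS + "prefLabel", "Portland"),
    ("http://example.org/vocab/pdx", SKOS + "altLabel", "PDX"),
    ("http://example.org/vocab/pdx", SKOS + "broader", "http://example.org/vocab/or"),
    ("http://example.org/vocab/or", SKOS + "prefLabel", "Oregon"),
]

def objects(subject, predicate):
    """All object values for a subject/predicate pair in the cache."""
    return [o for s, p, o in CACHE if s == subject and p == predicate]

def suggestion(uri):
    """Build an autocomplete entry carrying context beyond the preferred label."""
    return {
        "uri": uri,
        "label": next(iter(objects(uri, SKOS + "prefLabel")), None),
        "alt_labels": objects(uri, SKOS + "altLabel"),
        # Show broader concepts by their preferred labels when cached.
        "broader": [
            next(iter(objects(b, SKOS + "prefLabel")), b)
            for b in objects(uri, SKOS + "broader")
        ],
    }

print(suggestion("http://example.org/vocab/pdx"))
```

With a cache like this, the user picking "Portland" can also be shown "PDX" as an alternative label and "Oregon" as a broader concept, which is exactly the extra context the workshop describes.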