I need to create an alignment between the AFS Ethnographic Thesaurus and LCSH.
Language development platform service
https://freecontent.manning.com/adding-latex-rendering-to-our-website-part-1/
https://pkgw.github.io/webtex/install/
https://about.gitea.com/
https://docs.gitlab.com/ee/integration/
- Require safe and legal documentation before enabling cloning. Target language communities and bake OLAC in.
Needs a Zenodo connection (rough sketch below) and TeX rendering; LaTeX.js is one option.
https://docs.gitea.com/next/installation/comparison
Should compare with GitLab.
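A rough sketch of what the Zenodo connection could look like, using Zenodo's REST deposit API. This is only a starting point under assumptions: the third-party requests package, a personal access token, and placeholder metadata values.

```python
# A rough sketch of the Zenodo connection: create a draft deposition
# through Zenodo's REST deposit API. The access token and all metadata
# values are placeholders; requires the third-party "requests" package.
import requests

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR-ZENODO-TOKEN"  # placeholder, not a real credential

def create_draft_deposition(title, creator_names):
    """Open an empty draft on Zenodo that files can later be uploaded to."""
    response = requests.post(
        ZENODO_API,
        params={"access_token": TOKEN},
        json={"metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": "Deposited from the language development platform.",
            "creators": [{"name": name} for name in creator_names],
        }},
    )
    response.raise_for_status()
    return response.json()  # contains the deposition id and upload links
```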
UNTL Translator: a metadata utility
It looks like UNT might never adopt OLAC metadata, or even QDC. So is it possible to "upscale" these values via a utility? (See the sketch after the links below.)
https://library.unt.edu/metadata/fields/title.html
https://digital2.library.unt.edu/vocabularies/formats/
https://digital.library.unt.edu/help/faq/programmatic-access/
https://digital.library.unt.edu/oai/
https://digital.library.unt.edu/oai/?verb=ListMetadataFormats
https://github.com/HughP/UNT-INFO-5223/blob/main/DCTerms/DCTermsRecordTemplate-hp3-record2.xml
https://www.academia.edu/47979997/The_UNTL_Metadata_Guidelines_Version_2_2006_
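As a thought experiment, here is a minimal sketch of such an upscaling utility using only the Python standard library. The "untl" metadata prefix is an assumption to confirm against the ListMetadataFormats response above, and the crosswalk is illustrative rather than a vetted UNTL-to-OLAC mapping.

```python
# A sketch of a UNTL "upscaling" utility using only the standard library.
# Assumptions: the endpoint exposes a "untl" metadataPrefix (check
# ListMetadataFormats), and the CROSSWALK dict is illustrative, not vetted.
from urllib.request import urlopen
import xml.etree.ElementTree as ET

OAI_BASE = "https://digital.library.unt.edu/oai/"
NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

# Hypothetical element-level crosswalk: UNTL local name -> DC/OLAC term.
CROSSWALK = {
    "title": "dc:title",
    "creator": "dc:creator",
    "language": "dc:language",  # would need ISO 639-3 / olac:language refinement
    "subject": "dc:subject",
    "description": "dc:description",
}

def harvest(prefix="untl"):
    """Fetch one ListRecords page from the UNT OAI-PMH endpoint."""
    url = f"{OAI_BASE}?verb=ListRecords&metadataPrefix={prefix}"
    with urlopen(url) as response:
        return ET.fromstring(response.read())

def untl_record_to_olac(record):
    """Translate the mapped UNTL elements of one record into DC/OLAC pairs."""
    pairs = []
    for element in record.iter():
        local_name = element.tag.rsplit("}", 1)[-1]  # strip the XML namespace
        if local_name in CROSSWALK and (element.text or "").strip():
            pairs.append((CROSSWALK[local_name], element.text.strip()))
    return pairs

if __name__ == "__main__":
    root = harvest()
    for record in root.findall(".//oai:record", NS):
        for term, value in untl_record_to_olac(record):
            print(f"{term}: {value}")
```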
Metadata Utilities
I wonder if I can craft a CIDOC CRM-to-OLAC translator.
https://www.emerald.com/insight/content/doi/10.1108/GKMC-06-2022-0133/full/html
https://en.wikipedia.org/wiki/CIDOC_Conceptual_Reference_Model
https://www.cidoc-crm.org/collaborations
https://www.cidoc-crm.org/crmtex/sites/default/files/CRMtex_v2.0_June_2023.pdf
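If I did, a first sketch might start from the one mapping rule that matters most for OLAC: CRM's E33 Linguistic Object and its P72 "has language" property. The snippet below assumes input RDF uses the standard CRM namespace and relies on the third-party rdflib package; a real translator would need rules for many more classes and properties.

```python
# A first sketch of one CIDOC CRM -> OLAC mapping rule, using the
# third-party rdflib package. Assumes the input RDF uses the standard
# CRM namespace; E33/P72 is only the starting point, not a full mapping.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")

def crm_language_statements(path):
    """Yield (resource, language) pairs for every E33 Linguistic Object
    carrying a P72 'has language' statement -- the natural input for an
    OLAC dc:language / olac:language mapping."""
    graph = Graph()
    graph.parse(path)  # rdflib guesses the serialization from the file
    for subject in graph.subjects(RDF.type, CRM.E33_Linguistic_Object):
        for language in graph.objects(subject, CRM.P72_has_language):
            yield subject, language
```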
Re-implementing the OLAC validator
The OLAC validator runs on a piece of software affected by the Heartbleed security vulnerability. Thinking about re-implementing the validator, the following software comes to mind: https://github.com/zimeon/oaipmh-validator. There was also an online OAI-PMH validator from a former engineer on the Europeana project (I think he is based in Greece). His solution is not open source, but he mentioned that he would consider adding the OLAC profile: https://validator.oaipmh.com/
It would be good to see what other OAI-PMH validators look like and how submitters expect to interact with them; a first sketch of a basic protocol check follows the links below.
https://validador.rcaap.pt/validator2/?locale=en
http://oval.base-search.net/
https://doi.org/10.17700/jai.2016.7.1.277
https://rdamsc.bath.ac.uk/msc/t64
https://www.openaire.eu/validator-registration-guide
https://github.com/EuroCRIS/openaire-cris-validator
https://www.fosteropenscience.eu/content/openaire-compatibility-validator-presentation
http://oai.clarin-pl.eu/
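For reference, a minimal sketch of the kind of check a re-implemented validator would start with: fetch Identify and confirm the repository advertises an "olac" metadata prefix. The element names follow the OAI-PMH 2.0 schema; a real validator would cover the whole protocol plus the OLAC schema itself.

```python
# A sketch of the first checks an OLAC validator might run: is the
# Identify response well-formed with a <baseURL>, and does the
# repository advertise an "olac" metadataPrefix? Standard library only.
from urllib.request import urlopen
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def oai_request(base_url, **params):
    """Issue one OAI-PMH request and return the parsed XML root."""
    with urlopen(f"{base_url}?{urlencode(params)}") as response:
        return ET.fromstring(response.read())

def check_repository(base_url):
    """Return a list of readable problems (empty means both checks pass)."""
    problems = []
    identify = oai_request(base_url, verb="Identify")
    if identify.find(f"{OAI_NS}Identify/{OAI_NS}baseURL") is None:
        problems.append("Identify response is missing <baseURL>")
    formats = oai_request(base_url, verb="ListMetadataFormats")
    prefixes = [p.text for p in formats.iter(f"{OAI_NS}metadataPrefix")]
    if "olac" not in prefixes:
        problems.append(f"no 'olac' metadataPrefix (found: {prefixes})")
    return problems
```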
VRA Core and its use of xml:lang
Some information professionals might be confused about the use of language identification metadata in larger bibliographic metadata standards. For example, VRA Core, from the Visual Resources Association, is a metadata standard used to describe visual artifacts. It is implemented in XML and therefore takes on all the descriptive power of XML, including the use of the xml:lang attribute.
The following observations are made using the VRA Core 4 XML Schema, version 0.42, which implements the final VRA Core 4.0 guidelines of 2007-04-09. It is important to note that metadata standards implemented by memory institutions really have two parts: the "guidelines" and the "implementation" of those guidelines (in this case an XSD validation file). These two documents may not always be congruent, even when that is the intention. In cases of conflict I argue that the technical implementation, rather than the guidelines, should be treated as authoritative, because it is the implementation that actually determines which records validate.
The XSD validation document contains the following annotation around the use of the xml:lang attribute.
VRA Core metadata attributes which can be applied to virtually any element. Note that xml:lang should contain ISO 639 language codes, not the English names of languages. Although the XML Schema defines xml:lang as allowing ISO 639-2 (three-letter) codes, some validators will only accept ISO 639-1 (two-letter) codes.
This annotation is misleading. First, the VRA Core authors are trying to alert catalogers and technologists that they should not use the full English name of a language as a value, as might be done in other "library oriented standards", but should instead use language codes. In general this is a good thing. However, the VRA authors misread the XML specification when they indicate the need to use ISO 639 language codes. This is not quite true: XML requires BCP 47 language tags, as stated in the XML 1.0 fifth edition specification, §2.12 (https://www.w3.org/TR/xml/#sec-lang-tag). It is true that BCP 47 currently draws on ISO 639 codes, but that may not always be the case.
A second issue is how the annotation distinguishes between ISO 639-2 and ISO 639-1. If there are VRA Core data consumers or producers that do not consume or produce valid XML, then that is a transmission-machinery issue, not a protocol issue. BCP 47 does not call for the use of ISO 639-2/3 tags when an equivalent ISO 639-1 tag exists. If a data ingest process has only implemented ISO 639-1, then it has not implemented VRA, because VRA stands on XML, which stands on BCP 47. BCP 47 is an algorithm that calls upon different standards at different times; understanding the fallback nature of that algorithm would have clarified this point for the VRA authors.
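To make the fallback concrete, here is a tiny demonstration using the third-party langcodes package (one option among several BCP 47-aware libraries): three-letter codes normalize to two-letter ones exactly when ISO 639-1 has an equivalent.

```python
# Demonstration of BCP 47 normalization with the langcodes package
# (pip install langcodes): three-letter ISO 639-2/3 codes map to their
# two-letter ISO 639-1 equivalents when one exists, and are kept as-is
# only when no two-letter code is assigned.
import langcodes

for raw_tag in ["deu", "fra", "en", "haw"]:
    print(raw_tag, "->", langcodes.standardize_tag(raw_tag))
# expected: deu -> de, fra -> fr, en -> en,
# haw -> haw (Hawaiian has no ISO 639-1 code)
```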
The following resources are useful for a better understanding of Language Tags in XML:
DCMIType PhysicalObject
Using DCMIType: how would I classify a curated garden? Or bacteria vs. viruses?
Here is my outline for the paper on physical objects
OLAC CVs
It would be great to exemplify Maarten Mous's CV in OLAC; a sketch of a single entry as an OLAC record follows the links below.
https://www.universiteitleiden.nl/en/staffmembers/maarten-mous/publications#tab-4
Or some of these other CVs:
Scott DeLancey https://pages.uoregon.edu/delancey/index.html
Michel Ferlus https://hal.science/hal-04567293/document
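As a proof of concept, here is a sketch of one CV entry expressed as an OLAC record, built with the Python standard library. The namespaces follow the OLAC 1.1 metadata schema; the sample entry (Mous's Iraqw grammar, ISO 639-3 "irk") is cited from memory and worth verifying.

```python
# A sketch of one CV entry as an OLAC record, standard library only.
# Namespaces follow the OLAC 1.1 metadata schema; the sample entry
# details are cited from memory, so verify before reuse.
import xml.etree.ElementTree as ET

OLAC = "http://www.language-archives.org/OLAC/1.1/"
DC = "http://purl.org/dc/elements/1.1/"
XSI = "http://www.w3.org/2001/XMLSchema-instance"

for prefix, uri in (("olac", OLAC), ("dc", DC), ("xsi", XSI)):
    ET.register_namespace(prefix, uri)

def cv_entry_to_olac(title, creator, year, lang_code):
    """Build an olac:olac record for one publication in a CV."""
    record = ET.Element(f"{{{OLAC}}}olac")
    ET.SubElement(record, f"{{{DC}}}title").text = title
    ET.SubElement(record, f"{{{DC}}}creator").text = creator
    ET.SubElement(record, f"{{{DC}}}date").text = year
    # The OLAC pattern for "is about language X":
    subject = ET.SubElement(record, f"{{{DC}}}subject")
    subject.set(f"{{{XSI}}}type", "olac:language")
    subject.set(f"{{{OLAC}}}code", lang_code)
    return ET.tostring(record, encoding="unicode")

print(cv_entry_to_olac("A Grammar of Iraqw", "Mous, Maarten", "1993", "irk"))
```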
NLP Interchange Format
https://pypi.org/project/pynif/
https://www.w3.org/2015/09/bpmlod-reports/nif-based-nlp-webservices/
https://github.com/rankastankovic/TEI2NIF/blob/main/README.md
https://distantreading.github.io/
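A minimal sketch of producing NIF with pynif, following the usage pattern shown in its README; the URIs are placeholders, and the API should be checked against the current release.

```python
# A NIF sketch following the usage pattern in pynif's README
# (pip install pynif); all URIs below are placeholders.
from pynif import NIFCollection

collection = NIFCollection(uri="http://example.org/corpus")
context = collection.add_context(
    uri="http://example.org/corpus/doc1",
    mention="Maarten Mous works on Cushitic languages.")
context.add_phrase(
    beginIndex=0,
    endIndex=12,  # the span "Maarten Mous"
    taIdentRef="http://example.org/person/MaartenMous",
    annotator="http://example.org/tools/demo-annotator")
print(collection.dumps(format="turtle"))
```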
Curriculum Creation and Bloom's Taxonomies
Two really good websites: