I am asking around on different mailing lists to gain some insight into the archiving habits of linguists who use lexical databases. I am specifically interested in databases created by tools like FLEx, ToolBox, Lexus, TshwaneLex, etc.
The massive pre-print industry has influenced Zotero to add a specific item type for pre-prints. This is a cognitive fallacy which only exacerbates the citation and reference chaos.
Pre-prints are manuscripts... There are hand-written manuscripts, there are typescript manuscripts, and there are computer-generated manuscripts... Zotero already has manuscripts as a category... there is no need to add a new one.
To make matters worse, Zotero imports PDFs when it can find open-access versions of them. The problem is that when these are pre-prints, it imports them under the article/publication item type rather than under the pre-print item type. This makes authority-version management in Zotero a nightmare. Classic case (try importing): https://doi.org/10.1177/0964663914565848
I am still hopeful that Zotero staff will find a clean and easy way to automatically link pre-prints to their authority version records within Zotero.
Sometimes as a parent one has to encourage their child to do something the child doesn't want to do. Today was one of those times. I had to pull teeth to get Katja to come to the pool with me. I told her she only needed to swim 3 laps. After much cajoling we got to the car. By the time she got to the pool, she had a kick board and was off. I got a few laps in and she met me at the far wall of the 25-yard lane. She said to me: "I want to swim 12 laps". And so she did. So... from poolside observer 4 years ago to swim partner today.
This might be a way forward to an OAI-PMH repo: https://github.com/discourse/discourse-sitemap. Another option is to use a query mechanism in the JSON API to get all threads and treat those threads as resources for description. https://meta.discourse.org/t/discourse-rest-api-documentation/22706
I wonder how many layers a tag-group can have... https://docs.discourse.org/#tag/Tags/operation/updateTagGroup
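A minimal sketch of that second option in Python, assuming the standard `/latest.json` topic-listing endpoint that Discourse paginates with a `page` parameter; the field names follow that endpoint's payload shape, and `base_url` is a placeholder for the forum's root URL:

```python
import json
import urllib.request

def topics_from_page(payload):
    """Map one page of a Discourse /latest.json payload to minimal
    resource descriptions (id, title, slug, created date)."""
    topics = payload.get("topic_list", {}).get("topics", [])
    return [
        {
            "id": t["id"],
            "title": t["title"],
            "slug": t["slug"],
            "created": t.get("created_at"),
        }
        for t in topics
    ]

def fetch_all_topics(base_url):
    """Page through /latest.json until Discourse stops returning
    topics, collecting every thread as a describable resource."""
    page, results = 0, []
    while True:
        with urllib.request.urlopen(f"{base_url}/latest.json?page={page}") as r:
            batch = topics_from_page(json.load(r))
        if not batch:
            return results
        results.extend(batch)
        page += 1
```

Each harvested topic could then be mapped onto an OAI-PMH record, with the topic URL as the identifier.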
Subject analysis is very interesting. In a recent investigation into a theory of subject analysis, I was introduced to the concepts of "about-ness", "is-ness", and "of-ness".
Sometimes I wonder if linguists defy standard practices in subject representation, or if they simply make visible what the general population already finds challenging about subject analysis in cataloging.
I hearken back to the OLAC application profile, which is based on Dublin Core. Dublin Core does not scope the subject element to "about-ness" analysis. The UNT curriculum is informed by, and based (in structure) on, Steven J. Miller's Metadata for Digital Collections: A How-To-Do-It Manual. The issue at hand is that for linguists, about-ness is only relevant for information resources representing analysis. For other kinds of resources, such as primary oral texts or narratives captured via video, which are often the objects analyzed and discussed in those analytical resources, the primary view on subjecthood is through of-ness. As far as I know, no one has discussed of-ness descriptions of audio.
It also makes me wonder if genre is mostly about utility and not about a binding style. To this end, a scholar looking for a phonology corpus is looking for a combination of things: a MIME type, with a relationship to another MIME type, with an of-ness of a kind, and a subject of "phonology".
Splitting up the concepts of "about-ness", "is-ness", and "of-ness" provides analytical space for more articulate descriptions in the dc:description field. But when it comes to language materials, the question is: is language a subject by virtue of "of-ness" or by virtue of "about-ness"? There are several implications here:
The description field ought to be re-thought.
The subject field ought to be re-thought.
Some searches by linguists are likely the concatenation of two or three factors: a relationship between two records, plus a subject of one kind and a subject of a different kind.
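As a sketch of what such a concatenated search could look like, here is a hypothetical Python data model (the record ids, the facet names, and the "analyzes" relation are all invented for illustration) in which of-ness and about-ness are kept as separate facets and a query combines a relation with both:

```python
# Hypothetical records: each carries separate subject facets
# ("aboutness", "ofness") plus typed relations to other records.
records = {
    "audio-1": {"type": "audio/x-wav", "ofness": {"Tlingit narrative"},
                "aboutness": set(), "relations": {}},
    "paper-1": {"type": "application/pdf", "ofness": set(),
                "aboutness": {"phonology"},
                "relations": {"analyzes": {"audio-1"}}},
}

def find_corpus(records, relation, aboutness, ofness):
    """Return ids of records that some other record points to via
    `relation`, where the pointing record has the given aboutness
    and the target record has the given ofness."""
    hits = set()
    for rec in records.values():
        if aboutness in rec["aboutness"]:
            for target_id in rec["relations"].get(relation, set()):
                if ofness in records[target_id]["ofness"]:
                    hits.add(target_id)
    return hits
```

Under this model, `find_corpus(records, "analyzes", "phonology", "Tlingit narrative")` returns `{"audio-1"}`: the primary text is found through the about-ness of the analysis that cites it.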
Variation in accuracy, completeness, or consistency can contribute to lower-quality metadata records. Hughes (2006), when looking at OLAC records, rightly points out that coverage (the quantity of elements per record) is one way to estimate record quality. However, all three factors impact end-user perceptions of records and their associated resources.
For OLAC the question is how it can reward data contributors for high-quality metadata while also detecting low-quality metadata and correcting or enhancing it.
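The detection half, at least in the coverage sense Hughes describes, is mechanical enough to sketch. In this Python sketch a record is a plain dict, the element list is an illustrative Dublin Core subset, and the threshold is an invented example, not an OLAC policy:

```python
# Illustrative subset of Dublin Core elements (not the full OLAC profile).
DC_ELEMENTS = ["title", "creator", "subject", "description", "date",
               "type", "format", "language", "rights"]

def coverage(record, elements=DC_ELEMENTS):
    """Hughes-style coverage: the fraction of the element set that a
    record fills with a non-empty value. A proxy for quality only;
    it says nothing about accuracy or consistency."""
    filled = sum(1 for e in elements if record.get(e))
    return filled / len(elements)

def flag_low_quality(records, threshold=0.5):
    """Return ids of records whose coverage falls below `threshold`,
    i.e. candidates for correction or enhancement."""
    return [rid for rid, rec in records.items()
            if coverage(rec) < threshold]
```

Flagged records could then be routed back to contributors, which hints at the reward side: a coverage score per archive is cheap to publish.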
I wonder if there's a way to limit patent or copyright protection for products that are not made from green materials in cases where the product could be made from a green material. For example, chopsticks.
The chopsticks in the photo are plastic and silicone. They could be made from wood and maybe a metal clip, would serve the same function, and would likely have a similar functional product life. I wonder if an intellectual-property-rights carve-out would discourage the use of materials that do not degrade gracefully in product types like these. In this way, does the law facilitate and reward inventions which complement environmental life-cycles, or does it facilitate the consumerism which leads to the Great Pacific Garbage Patch?
This morning while changing Hugh V's diaper, I said: "Now we have to wipe that pee off so the skin doesn't hurt later." Hugh V says: "Do we need ahh cream (diaper rash paste)?" To which I replied: "No, we need a little boy who puts his pee in the potty." To which he replied: "Well, I'm definitely a little boy."
OK, I did the update on May 29th... it took 10 hours. Now I'm sorting out all the visual theme stuff I lost in the process, something about Unity vs. GNOME. Chromium updated fine, and Brave installs. I had to uninstall Vivaldi and Signal to get the upgrade to work. Obsidian and Slack now work. There is the question of whether I should just move up the ladder to 22.04.