I have been really encouraged by the availability of images which have been released under Creative Commons licenses.
While there are a lot of icon sets out there, here are some of my "go to" places.
The first place I usually go for free icons is thenounproject.com. There is a growing community behind the endeavor, and the project's management operations are being taken seriously.
Bush House: the BBC World Service is leaving its home after 71 years. Photo: Paul Grover via The Telegraph
There has recently been some discussion on the web about the BBC selling its production facilities and moving the World Service from Bush House to somewhere else. The BBC World Service has been a major player in radio and oral culture in Great Britain and around the world for 71 years. A lot of history has been reported by the service, and the BBC's records (including its archive) hold oral histories of a variety of world events from the last 71 years in a variety of languages (Wikipedia has a brief description of the collections at the BBC).
This post is an open draft! It might be updated at any time…
Regardless of the views expressed here in this review, it should be stated that I have high hopes for Webonary's future. Some of the people working on Webonary are my colleagues, so I hedge my review with the understanding that this is not the final state of Webonary. I am excited that easy-to-use technology like WordPress is being used, and that minority language groups around the world have the opportunity to use free software like Webonary. I will be looking at the Webonary WordPress plugin and several associated issues.
January 4-5, 2012, I had the opportunity to participate in the LSA's Satellite Workshop for Sociolinguistic Archival Preparation in Portland, Oregon. There were a great many things I learned there. So here are only a few thoughts.
Part of the discussion at the workshop was on how we can make the corpora collected by sociolinguists available to the larger sociolinguistic community. In particular, the discussion I am referencing revolved around the standardization of metadata in the corpora. (In the discussion it was established that there are two levels of metadata, "event level" and "corpus level".) While OLAC gives us some standardization at the corpus level, event-level metadata is still unique to each investigation, and arguably this is necessary. However, it was also pointed out that not all event-level metadata needs to be encoded or tracked uniquely. That is, data like the date of recording, names of participants, location of recording, and gender (male/female) of participants can all be regularized across the community.
With the above as preface, we do need to understand that there are still various kinds of metadata which need to be collected. In the workshop it was acknowledged that the field of language documentation is about 10 years ahead of this community of sociolinguists. What was not well defined in the workshop was the distinction between a language documentation corpus and a sociolinguistics corpus. It seems to me, as a new practitioner, that the chief difference between these two types of corpora is the self-identification of the researcher. That is, does the researcher self-identify as a sociolinguist or as a language documenter? Both types of corpora attempt to get at the vernacular, and both collect sociolinguistic facts. It would seem that both corpora are essentially the same (give or take a few metadata attributes). So, I will take an example from the metadata write-up I did for the Meꞌphaa language documentation project. In that project we collected metadata about:
People
Equipment
Equipment settings during recording
Locations
Recording Environments
Times
Situations
Linguistic Dynamics
Sociolinguistic Attitudes
In the following diagram I illustrate the cross-cutting of a corpus with these "kinds" of metadata. The heavier, darker line represents the corpus, while the medium-weight lines represent the "kinds" of metadata. Finally, the lighter lines represent the sub-kinds of metadata, where the sub-kinds might be the latitude, longitude, altitude, datum, country, and place name of the location.
Corpora metadata categories with some sub-categories
This does not mean that the corpus does not also need to be cross-cut with these other "sub-kinds". However, these sub-kinds are significantly more numerous and will vary from project to project. Some of these metadata kinds will be collected in a speaker profile questionnaire, but some can only be provided through reflection on the event. To demonstrate the cross-cutting of these metadata elements on a corpus, I have provided the following diagram. It uses categories which were mentioned in the workshop and is not intended to be comprehensive. In this second diagram, the cross-cutting elements might themselves be taxonomies. They may have controlled vocabularies, they may have an open set of possible values, or they may represent a scale.
Taxonomies for social demographics and social dynamics for speakers in corpora
Both of these diagrams illustrate what in this workshop was referred to as "event level" metadata, rather than "corpus level" metadata.
A note on corpus level metadata vs. descriptive metadata
There is one more thing which I would like to say about "corpus level" metadata. Metadata is often separated out by function. That is, what does the metadata allow us to do, and why is the metadata there?
I have been exposed to the following taxonomy of metadata types through course work and in working with photographs and images. These classes of metadata are also similar to those posted by JISC Digital Media as they approach issues with metadata for digital audio.
Descriptive metadata: supports discovery, attribution, and identification of the resources created.
Administrative metadata: supports management, preservation, and appropriate usage of the resources created.
Technical metadata: describes the machinery used to create the resource and the technical aspects of the resource.
Use and rights metadata: covers copyright, license, and moral ownership of the items.
Structural metadata: maintains relationships between the parts of complex, multi-part resources (Spanne 2008).
Situational metadata: describes the events around the creation of the work, asking questions about the social setting or the precursory events. It follows ideas put forward by Bergqvist (2007).
Use metadata: metadata collected from or about the users themselves (e.g. user annotations, or the number of people accessing a particular resource).
I think it is only fair to point out to archivists and librarians that linguists and language documenters do not see a difference between descriptive and non-descriptive metadata in their workflows. That is, sometimes we want to search all the corpora by license or by a technical attribute. This elevates these attributes to the function of discovery metadata. It does not remove descriptive metadata from its role in finding things, but it does mean that the other kinds of metadata are also viable as discovery metadata.
I run a website, wycliffe.me, for redirecting traffic (a URL redirector). But I need it to have a CRM sort of component to it. So I added some custom fields to the posts using Just Custom Fields. (I am using posts, but I could just as well use a custom post type via Custom Post Type UI.) But now I want a summary of some of those fields in a special panel on the back-end. So I have collected some links to read and start hacking.
First I need to create an options page in the admin area: http://buildinternet.com/2010/01/create-custom-option-panels-with-wordpress-2-9/.
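To make this concrete, here is a minimal sketch of the kind of thing that tutorial walks through, using WordPress's standard add_options_page() call. Every function name and slug below is a placeholder of my own, not something from the tutorial:

/** Sketch: register a minimal options page under Settings. */
function wycliffe_me_register_summary_page()
{
    // Adds a page under the Settings menu; visible to users who can manage options.
    add_options_page('Redirect Summary', 'Redirect Summary', 'manage_options', 'redirect-summary', 'wycliffe_me_render_summary_page');
}
add_action('admin_menu', 'wycliffe_me_register_summary_page');

function wycliffe_me_render_summary_page()
{
    // The panel summarizing the custom field values would be built and printed here.
    echo '<div class="wrap"><h2>Redirect Summary</h2></div>';
}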
Next I need a way to collect the data. So I looked for a plugin which can search my database and return fields… sorta like Views for Drupal. And voilà, there is such a plugin: Query Wrangler. (Query Posts might be another option, but I did not try it.) However, this plugin is not powerful enough: I cannot search all the fields created by my other plugins, only my own custom fields and content types. More power would be ideal.
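In the meantime, the fields I created myself can be pulled with a plain WP_Query. Here is a rough sketch, where 'redirect_target' stands in for whichever custom field you are after (a made-up name, not one from any plugin):

// Sketch: pull every post that has a given custom field set.
$summary_query = new WP_Query(array(
    'post_type'      => 'post',
    'posts_per_page' => -1,
    'meta_key'       => 'redirect_target', // hypothetical field name
));
foreach ($summary_query->posts as $p)
{
    // Print each post title alongside the value of the custom field.
    echo get_the_title($p->ID) . ': ' . get_post_meta($p->ID, 'redirect_target', true) . "\n";
}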
I am all for OpenData and Open.NASA. But how does NASA, being a government entity, relate to how it "licenses" its data and software? What I mean is, shouldn't the things being "open sourced" be public domain, rather than licensed content? I agree that creating a license which is not widely recognized is not useful; that is the whole point behind Creative Commons. But are there cases where NASA is "over licensing" content that should instead be released into the public domain? Reference: CC Salon in Jan 2011, time segment 1:05:00, where Joi Ito talks about the issue. http://blip.tv/creative-commons/creative-commons-salon-mountain-view-what-does-it-mean-to-be-open-in-a-data-driven-world-4725230
What prevents, or what reasons are there for not putting NASA’s data and software, which it releases, in the public domain? Is that not more open?
Because I have been on the team doing the SIL.org redesign, I have been surveying the Open Source landscape for what is available to connect Drupal with DSpace data stores. We are planning on making DSpace the back-end repository, with another CMS running the presentation and interactive layers. I found a module, still in development, which parses DSpace's XML feeds. However, this is not the only thing I am looking at. I am also looking at how we might deploy Omeka. Presenting the entire contents of a Digital Language and Culture Archive, plus citations for its physical contents, is no small task. In addition to past content there is also future content. That is to say, archiving is not devoid of publishing, so there is also the PKP project. (SIL also currently has a publishing house, whose content needs version control (CVS or similar) and editorial workflows, which interact with archiving and presentation functions.)
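For the XML-feed route, a quick proof of concept does not need much PHP. Here is a hedged sketch; the feed URL is a placeholder, and a real DSpace install may publish its feeds at a different path:

// Sketch: list item titles from a repository's RSS 2.0 feed.
$feed_url = 'http://repository.example.org/feed/rss_2.0/site'; // placeholder URL
$xml = simplexml_load_file($feed_url);
if ($xml !== false)
{
    foreach ($xml->channel->item as $item)
    {
        // Each RSS item carries at least a title and a link back to the record.
        echo $item->title . ' - ' . $item->link . "\n";
    }
}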
Omeka
Wally Grotophorst has a really good reflection on Omeka and DSpace; I am not sure that it is current, but it does present the problem space quite well. Tom Scheinfeldt at Omeka also has a nice write-up on why Omeka exists, titled "Omeka and Its Peers". It is really important to understand Omeka's place in the ecosystem of content delivery to content consumers by qualified site administrators.
I set up another WordPress site and I wanted to transfer what I had written there to this site, so that all my writings would be together. This would include comments, links and attached media, and metadata about the post.
What I want a transfer plugin to do.
So I looked for a WordPress plugin to do that. I found two (and, as it is whenever I find more than one, I had to test them out and write up the results):
Xpost: Cross-post was the first plugin I found and it seemed to have a lot of really nice features.
Transfer: the main difference between the two, based on the authors' descriptions, is that this one said it also transferred images attached to the post.
So I tried Transfer first.
Transfer
However, when I installed Transfer, it said that it could not find the Zend Framework:

Warning: require_once(Zend/XmlRpc/Client.php) [function.require-once]: failed to open stream: No such file or directory in /home1/public_html/username/wordpress/wp-content/plugins/transfer/library/Aperto/XmlRpc.php on line 3

(Path values changed to protect the innocent.)
The plugin requires that one download Zend Framework Minimal (http://framework.zend.com/download/latest) and put the Zend folder under /wp-content/plugins/transfer/library/
I did this, and then I would get the WordPress white screen of death. I was told that this white screen of death was because my provider terminated a process (I had maxed out my user's memory allocation). This white screen happens on one of my installs but not on another under a different user… so I am not sure what is going on. Neither WP install would transfer the post. To get around the white screen of death I had to de-activate the plugin by editing the database.
I had initially failed to read the install requirement for Zend, so I found another solution for adding Zend to WordPress.
So I knew I needed to install the Zend Framework. I am sorta surprised that Dreamhost, my hosting provider, did not have Zend set up on my server in a way that WordPress would automatically detect. Oh well, is there a plugin for that? Uh, yes, there are like a gazillion! So I went with the first one: Zend Framework [or also in WP-Extend]. I loaded it and then added the helpful code found in the online WordPress forums.
Go to your wp-config.php and paste this right after the * @package WordPress part and before the // ** MySQL settings – You can get this info from your web host ** // line:
/** Zend Framework **/
function add_include_path ($path)
{
    // Appends each given directory to PHP's include path (forum snippet).
    foreach (func_get_args() AS $path)
    {
        if (!file_exists($path) OR (file_exists($path) && filetype($path) !== 'dir'))
        {
            trigger_error("Include path '{$path}' does not exist", E_USER_WARNING);
            continue;
        }
        // Add the directory only if it is not already on the include path.
        $paths = explode(PATH_SEPARATOR, get_include_path());
        if (array_search($path, $paths) === false)
        {
            array_push($paths, $path);
        }
        set_include_path(implode(PATH_SEPARATOR, $paths));
    }
}
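With that helper in place, wp-config.php just needs one call pointing at wherever the Zend folder actually lives. The path below is illustrative only, matching where the Transfer plugin asked me to unpack Zend:

// Illustrative path; adjust to wherever your Zend folder was unpacked.
add_include_path(dirname(__FILE__) . '/wp-content/plugins/transfer/library');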
After I did both of these things all of my errors went away.
I did try a second plugin for installing the Zend Framework, WP-ZFF Zend Framework Full. This one said it would modify the include path itself, so I thought I could use it without modifying wp-config.php, but the plugin failed on import, so I deleted it.
So, in the sad case that I was not able to get Transfer to work, I moved on and decided to try Xpost.
Xpost
Xpost [on WP-Extend] was a breeze to set up, and I actually got it working for a simple post. However, I was not able to select the target category in the master WP install from the writer's WP install (the test post I used just went to the default category).
Xpost not getting categories available on the master WP install.
The box just says "categories loading". This seems to be a problem reported by Nigel and by gulliver.
The test image was not transferred to the media library of the master WP install from the writer's WP install. Additionally, if the category of the post is changed in the master WP install, then the writer's WP install loses track of the post. This is only temporary, but it results in the writer not being able to update the post on the first try (a red error message is shown). If the writer tries a second time, the original post in the master WP install is found and updated, including the "removed" category. However, this category was intentionally removed by the editor on the master WP install, so this creates a bit of a conflict. BTW: it would be nice to be able to select a special custom post type for imports.
It seems that Xpost was designed to broadcast out rather than to ingest.
I use MAMP for my local test environment. But I have recently moved beyond just PHP apps: I am also looking at using Tomcat, and I would like to mess around with DSpace locally and use Solr too. I have found a couple of helpful guides for adding things to MAMP.
Drush: I also want Drush for working with Drupal. But this does not need to live in the MAMP folder. I just don’t know where else is safe. (I should have more on Drush later.)
One of the problems I am facing is that I really like tools like MacPorts, but I do not want to tinker with the core and default settings of my OS X machine. So I find that MAMP is a good alternative, but I cannot type a command on the command line and have all the dependencies download automatically. I recently found that I could do something like this with Homebrew… I have never used it before, but it looks to be the tool for the job. So I have collected a few tutorials, like installing PHP 5.3, using Gmail as an SMTP server, and setting up Solr.
Jetpack is in no way new… but I have never installed it (it seems that half a million other people have, though). The only service I have used from Automattic is Akismet. Then, about a month ago, I installed After the Deadline as a Google Chrome plugin to help me with my spelling mistakes. It seemed to work, so I thought I would give it a go as a WordPress plugin.
What was new was that I had not integrated a sharing solution for readers of my blog. So as of now there is a "share this" option at the end of my posts.
Sharing options
Of course Sharedaddy, the sharing plugin, did not have a Google +1 sharing option, nor a del.icio.us sharing option. So I had to find some solutions. I found a fork of Sharedaddy on GitHub which had added Google+ and LinkedIn. (I am not on Google+, but I just joined LinkedIn last week as I was redoing my resume.)
To add Delicious I followed a post by Ryan Markel to find the right share-service URLs.
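For anyone trying the same thing: a custom Sharedaddy service just needs a name and a sharing URL built with its placeholders. If I recall right, the Delicious one came out looking roughly like this (the exact save endpoint may have changed since):

http://del.icio.us/post?url=%post_url%&title=%post_title%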
Menus
The other thing I figured out this week was how to use the Menus feature under the Appearance tab. I have been using K2 since 2005 and have always thought that the menus in the default theme were sufficient; I have usually not had complex menu desires, so there was no real need to learn these new features. Now, however, I wanted to put several picture pages under the same menu. So voilà, it is done now.
New menu settings
Others
(Mostly RDFa and HTML5)
I also have a plugin that is adding Open Graph RDFa tags to my theme. My current version of K2 is HTML5, but it is not validating with the RDFa tags in it. So I was trying to validate them, but I have not been successful. I looked at this answer, which said to add something to the doctype. But then there are more answers too. Sometimes these answers are beyond me. I wish I had some structured learning in this subject area.
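If I am reading those answers right, the gist is to declare the RDFa prefixes on the html element itself. A minimal sketch of one way to do that from a plugin or functions.php, assuming the theme's header.php calls the standard language_attributes() function (the function name below is my own placeholder):

/** Sketch: declare the Open Graph prefix on the <html> element. */
function my_add_og_prefix($output)
{
    // Appends an RDFa prefix declaration to whatever language_attributes()
    // already prints inside the theme's <html> tag.
    return $output . ' prefix="og: http://ogp.me/ns#"';
}
add_filter('language_attributes', 'my_add_og_prefix');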
Why RDF?
And RDFa is the basis of Open Graph, the technology used to sync Facebook Likes between my site and Facebook.