I was exploring the internet and found a really cool plug-in for WordPress. This plugin lets you define specific sets of plug-ins that you want to repeatedly download when deploying websites. This is awesome! WordPress Install Profiles. Work smarter.
Over the last few weeks I have been contemplating how multi-lingual content could work on sil.org. (I have had several helpful conversations to direct my thinking.)
As I understand the situation, there are basically three ways in which multi-lingual content could work.
First let me say that there is a difference between multi-lingual content, multi-lingual taxonomies, and multi-lingual menu structures. We are talking about content here, not menu and navigation structures or taxonomies. Facebook has probably presented the best framework to date for harnessing the power of crowds to translate navigation structures. In just under two years they added over 70 languages to Facebook. However, Facebook has had some bumps along the way, as Dropbox points out in their post about their experience translating their products and services.
- Use a mechanism which shows all the available languages for content and highlights which ones are available to the user. Zotero has an implementation of this on their support forums.
- Basically, create a subsite for each language and then only show which pages have content in that language. Wikipedia does this: it has a menu on the left side with links to articles with the same title in other languages, and only languages in which an article on that title has been started are shown in the menu.
- Finally, create a cascading structure for each page or content area. So there is a primary language and then a secondary, tertiary, or quaternary language, etc., based on the browser's language of choice, with country IP playing a secondary role. If there is no page in the primary language, then the next language in the order of preference will show. This last option has been preferred by some because if an organization wants to present content to a user, then obviously it should be in the user's primary language. But if the content is not available in the primary language, the organization would still want to let the user know that the content exists in another language.
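The cascading fallback in that third option can be sketched in a few lines. This is a minimal sketch, assuming the preference list has already been derived from the browser's Accept-Language header (with country IP as a secondary signal); the `"en"` site default is a hypothetical choice.

```python
# Minimal sketch of the cascading language fallback (option three).
# `preferences` is assumed to come from the browser's Accept-Language
# header, with country-of-IP as a secondary signal; "en" as the site
# default is a hypothetical choice, not part of any spec.
def pick_language(available, preferences, default="en"):
    for lang in preferences:
        if lang in available:  # first preferred language the page exists in
            return lang
    return default  # nothing matched: fall back to the site default

# A user preferring German, then French, viewing a page that exists
# only in Spanish and French, would be shown the French version:
print(pick_language({"es", "fr"}, ["de", "fr", "en"]))  # → fr
```

The same function answers the "let the user know the content exists in another language" case: whatever `pick_language` returns that is not the user's first preference can be flagged in the interface as a fallback.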
It would also be good to understand the concepts used in Drupal 7 (and Drupal 8) for multi-lingual content. There are several resources which I have found helpful:
- Localized and Multi-Lingual Content in Drupal 7
- Drupal 7’s new multilingual systems (part 4) – Node translation
- Drupal 7’s new multilingual systems compilation
- Drupal 8 Multilingual Initiative
It would appear from this list of resources that Drupal's default behavior is most in line with the second of the three options given above.
SEO for standard websites is pretty straightforward. I happen to be working on a website redesign (in Drupal) which presents linguistic resources, both published and unpublished. I recently came across two specialized SEO options which are useful:
- Integration with Google Scholar
- Aggregation with OLAC
Google Scholar’s page on getting data into Google Scholar: http://scholar.google.com/intl/en/scholar/inclusion.html
This blog also has an interesting write up: http://blog.reallywow.com/archives/123
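For journal-style content, Google Scholar's inclusion guidelines rely on meta tags in the page's `<head>`. Here is a sketch of the Highwire Press-style tags it looks for; the title, author, date, and URL below are all placeholders, not real records:

```html
<!-- Highwire Press-style tags per Google Scholar's inclusion guidelines;
     all values here are placeholders. -->
<meta name="citation_title" content="A Grammar Sketch of Some Language">
<meta name="citation_author" content="Doe, Jane">
<meta name="citation_publication_date" content="2011/11/15">
<meta name="citation_pdf_url" content="http://example.org/papers/sketch.pdf">
```

In Drupal these could presumably be emitted by a metatag module or a theme preprocess function for the resource content type.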
This means implementing the OAI-PMH protocol so that OLAC can harvest it.
I am not sure exactly how this is done… but here is the link: http://www.language-archives.org/.
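Since OAI-PMH is just HTTP GET requests with a `verb` parameter, the shape of a harvest is easy to sketch. Below is a minimal sketch in Python; the repository endpoint is a hypothetical example URL, and `olac` is the metadata prefix an OLAC harvester would ask for.

```python
from urllib.parse import urlencode

def oai_request_url(base_url, verb, **params):
    """Build an OAI-PMH request URL (the protocol is plain HTTP GET)."""
    query = {"verb": verb, **params}
    return base_url + "?" + urlencode(query)

# A harvester like OLAC's would issue requests such as these;
# the base URL below is a hypothetical repository endpoint:
base = "http://example.org/oai"
print(oai_request_url(base, "Identify"))
# → http://example.org/oai?verb=Identify
print(oai_request_url(base, "ListRecords", metadataPrefix="olac"))
# → http://example.org/oai?verb=ListRecords&metadataPrefix=olac
```

The repository's job is to answer these six verbs (`Identify`, `ListRecords`, `GetRecord`, etc.) with the prescribed XML; once it does, OLAC can harvest it on a schedule without any further coordination.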
I have been a WordPress fan since 2005 and have run several sites on WordPress simultaneously since then. Running WordPress is dead easy; I can wrap my head around it. This past January, a colleague was ecstatic about the release of Drupal 7. I was a bit less ecstatic. (More the "I'm glad you are excited" kind of guy.) Then I saw the new admin interface and my interest was piqued. So I downloaded a few modules and bam! I saw the power. Amazing. A steep learning curve, but still something beautiful.
My story is much the same as Kevin Dees's. This fall I went to DrupalCamp Austin and was able to wrap my head around a few more things. (Mostly things which showed me there was still a lot to learn.) So from time to time you will see me post some things I am learning about Drupal.
Drush for WordPress
While I was at Austin I kept hearing about Drush. Then when I got back home I realized that I needed to download a lot of modules to work on a particular website. I could do this by hand several times, or I could learn to use Drush with Drush Make. Drush is a command-line shell and scripting interface for Drupal. Once I discovered its power I started looking for something similar for WordPress. I don't think there is anything exactly like Drush, but there are two projects worth checking out:
However, it does not seem that there is a Drush Make for WordPress, although there has been some thought about how to make Drush Make "cross-platform" so it works with other CMSes like WordPress. Wouldn't it be nice if WordPress developers got handed a tool from the Drupal community....
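For comparison, this is roughly what a Drush Make file looks like on the Drupal side: a plain INI-style manifest of core plus the modules to fetch. The file name and version numbers here are illustrative:

```ini
; example.make -- an illustrative Drush Make manifest
core = 7.x
api = 2

; Fetch Drupal core plus a few contributed modules
projects[drupal][version] = 7.x
projects[views][version] = 3.x
projects[ctools][subdir] = contrib
```

Running `drush make example.make mysite` would then download everything into `mysite` in one step - exactly the repeatable-download workflow that the WordPress install-profiles plugin I mentioned earlier is reaching for.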
Last year I wrote about Selected Works™ & BePress because I was looking at how SIL International might best display the professional abilities of their personnel. This means putting their CVs and past project activity in an accessible portfolio. I have also been looking at apps like BibApp, which pulls info from DSpace. Since sil.org is looking at Drupal as a CMS, I recently ran across OpenScholar, with an example by Harvard.
This is part of what I learned at Drupal Camp Austin 2011.
Image from http://twitpic.com/3pvrmw/full.
Because I have been on the team doing the SIL.org redesign, I have been surveying the Open Source landscape to see what is available to connect Drupal with DSpace data stores. We are planning on making DSpace the back-end repository, with another CMS running the presentation and interactive layers. I found a module, still in development, which parses DSpace's XML feeds. However, this is not the only thing I am looking at. I am also looking at how we might deploy Omeka. Presenting the entire contents of a Digital Language and Culture Archive, along with citations for its physical contents, is no small task. In addition to past content there is also future content; that is to say, archiving is not devoid of publishing, so there is also the PKP project. (SIL also currently has a publishing house, whose content needs version control and editorial workflows which interact with archiving and presentation functions.)
Wally Grotophorst has a really good reflection on Omeka and DSpace; I am not sure that it is current, but it does present the problem space quite well. Tom Scheinfeldt at Omeka also has a nice write-up on why Omeka exists, titled "Omeka and Its Peers". It is really important to understand Omeka's place in the ecosystem of content delivery to content consumers by qualified site administrators.
@mire talks about what DSpace could learn from Omeka.
A DSpace mailing list discussion covering some DSpace technologies for mixing with OAI-ORE, Fedora, Omeka, and Drupal.
I have been looking for a decent coding application for OS X. I don’t code full-time, and I want something intuitive to use, simple to discover workflows in, and with syntax highlighting. I do CSS and XHTML and am getting into some PHP. I don’t favor Aquamacs‘ command-line-like interface when saving documents.
I have had a few recommended to me:
I have been looking at developing some plugins/themes for WordPress and some modules for Drupal, especially after being at DrupalCamp Austin 2011.
As part of my job I work with materials created by the company I work for - that is, the archived materials. We have several collections of photos taken by people from around the world; in fact we might have as many as 40,000 photos, slides, and negatives. Unfortunately most of these images have no meta-data associated with them. It just happens that many of the retirees from our company still live near, or volunteer in, our offices, and much of the meta-data for these images lives in their minds. Each image tells a story. As an archivist I want to be able to tell that story to many people, but I do not know what that story is. I need to be able to sit down, listen to that story, and make notes on each photo. This is time consuming - more time consuming than the time I have.
Here is the data I need to minimally collect:
Photo ID Number: ______________________________
Who (photographer): ____________________________
Who (subject): ________________________________
When (was the photo taken): _______________________
Where (Country): _______________________________
Where (City): _________________________________
Where (Place): ________________________________
What is in the Photo: ____________________________
Why was the photo taken (At what event):_________________________
Photo Description:__short story or caption___
Who (provided the Meta-data): _________________________
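To give a feel for how an iPad app might hold these answers, here is the paper form above restated as a simple record, with a completeness check a data-entry screen could use. This is a sketch; the field names and the sample photo ID are my own shorthand, not an existing schema.

```python
# The paper form above restated as a record; field names are my own
# shorthand and the sample photo ID is hypothetical.
photo_record = {
    "photo_id": "P-000123",   # Photo ID Number
    "photographer": None,     # Who (photographer)
    "subjects": [],           # Who (subject)
    "date_taken": None,       # When (was the photo taken)
    "country": None,          # Where (Country)
    "city": None,             # Where (City)
    "place": None,            # Where (Place)
    "contents": None,         # What is in the photo
    "event": None,            # Why was the photo taken (at what event)
    "description": None,      # short story or caption
    "metadata_source": None,  # Who (provided the meta-data)
}

def is_complete(record):
    """A data-entry screen could flag records with unanswered questions."""
    return all(value not in (None, [], "") for value in record.values())

print(is_complete(photo_record))  # → False: only the ID has been filled in
```

The point of structuring it this way is that each one-on-one session with a retiree either completes a record or leaves visible gaps for a later session.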
Here is my idea: have two volunteers with iPads sit down with the retirees, show them these pictures on the iPads, and start collecting the data. The iPad app needs to be able to display the photos and then allow the user to answer the above questions quickly and easily.
The iPad is only the first step, though. The iPad works in one-on-one sessions, with one person at a time. Part of the overall strategy needs to be a crowd-sourcing effort for meta-data collection. To implement this there needs to be a central point of access where interested parties can have a many-to-one relationship with the content. This community-added meta-data may have to be kept in a separate taxonomy until it can be verified by a curator, but there is no reason that community-added meta-data cannot be expected to be valid.
However, what the app needs to do is more in line with MetaEditor 3.0. MetaEditor actually edits the IPTC tags in the photos, allowing the meta-data to travel with the images. In one sense, adding meta-data to an image is annotating the image, but this is something completely different from what Photo Annotate does to images.
Photosmith seems to be a move in the right direction, but it is focused on working with Lightroom, not with a social media platform like Gallery2 & Gallery3, Flickr, or Coppermine.

While looking at open-source photo CMSs, one of the things we have to be aware of is that meta-data needs to come back to the archive in a Dublin Core “markup”. That is, it needs to be mapped and integrated with our current DC-aware meta-data schema. So I looked into modules that make Gallery and Drupal “DC aware”. One of the challenges is that there are many photo management modules for Drupal. None of them will do all we want, and some will do what we want more elegantly than others (in a Code is Poetry sense). In Drupal it is possible that several modules together might do what we want. But what is still needed is a theme which elegantly and intuitively pulls together the users, the content, the questions, and the answers. No theme will do what we want out of the box. This is where form, function, design, and development all come together - and each case, especially ours, is unique.
- Adding Dublin Core Metadata to Drupal
- Dublin Core to Gallery2 Image Mapping
- Galleries in Drupal
- A Potential Gallery module for drupal – Node Gallery
- Embedding Gallery 3 into Drupal
- Embedding Gallery 2 into Drupal
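To make the "DC aware" idea concrete, here is a sketch of a crosswalk from IPTC photo fields to Dublin Core elements. The pairings below are illustrative assumptions, not our archive's actual schema, which would have its own documented mapping.

```python
# Hypothetical IPTC -> Dublin Core crosswalk; the pairings are
# illustrative, not an archive's authoritative mapping.
IPTC_TO_DC = {
    "By-line":          "dc:creator",
    "Headline":         "dc:title",
    "Caption-Abstract": "dc:description",
    "Keywords":         "dc:subject",
    "Date Created":     "dc:date",
    "Country":          "dc:coverage",
}

def to_dublin_core(iptc_record):
    """Map whichever IPTC fields are present onto DC elements."""
    return {IPTC_TO_DC[k]: v for k, v in iptc_record.items() if k in IPTC_TO_DC}

sample = {"By-line": "J. Photographer", "Keywords": ["Mexico", "market"]}
print(to_dublin_core(sample))
# → {'dc:creator': 'J. Photographer', 'dc:subject': ['Mexico', 'market']}
```

Whatever module combination we end up with, something like this mapping has to exist somewhere, so that meta-data collected in the gallery layer comes home to the archive in DC terms.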
This crowd-sourcing model for meta-data has been implemented by the Library of Congress in the Chronicling America project, where the Library of Congress is putting images out on Flickr and the public is annotating (or “enriching” or “tagging”) them. Flickr has something called Machine Tags, which are also used to enrich the content.
There are two challenges though which still remain:
- How do we sync offline iPad enriched photos with online hosted images?
- How do we sync the public face of the hosted images to the authoritative source for the images in the archive’s files?
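For the first challenge, one workable sketch is to match offline-enriched records back to the archive's authoritative files by content checksum rather than by filename, which copies tend to lose. Everything here (function names, record shape) is a hypothetical design, not an existing tool:

```python
import hashlib

def file_checksum(path):
    """Identify an image by its content so an offline copy can be matched
    back to the archive's authoritative file, whatever it was renamed to."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def merge_enrichment(archive_index, offline_records):
    """archive_index maps checksum -> archive photo ID; offline_records
    is a list of {"checksum": ..., "metadata": {...}} from iPad sessions."""
    merged, unmatched = {}, []
    for record in offline_records:
        photo_id = archive_index.get(record["checksum"])
        if photo_id is None:
            unmatched.append(record)  # no authoritative file found
        else:
            merged.setdefault(photo_id, {}).update(record["metadata"])
    return merged, unmatched
```

Records whose checksum is unknown to the archive land in `unmatched` for a curator to review - the same holding pen suggested above for unverified community-added meta-data.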
This post is an open draft! It might be updated at any time… but it was last updated on <?php the_modified_date(); ?> at <?php the_modified_time(); ?>.
Meta-data is not just for Archives
Bringing the usefulness of meta-data to the language project workflow
It has recently come to my attention that there is a challenge when considering the need for a network-accessible file management solution during a language documentation project. This comes with my first introduction to linguistic field experience and my first field setting for a language documentation project.

The project I was involved with was documenting four languages in the same language family. The location was in Mexico. We had high-speed Internet and a Local Area Network. The electricity was stable (more often than not). The heart of the language communities was a 2-3 hour drive from where we were staying, so we could make trips to different villages in the language community, and there were language consultants coming to us from various villages. The consultants who came to us were computer literate and were capable of writing in their language.

The methodology of the documentation project was motivated along the lines of: “we want to know ‘xyz’ so we can write a paper about ‘xyz’, so let’s elicit things about ‘xyz'”. In a sense, the project was product oriented rather than (anthropological) framework oriented. We had a recording booth. Our consultants could log into a Google Doc and fill out a paradigm; we could run the list of words given to us through the Google Doc to a word processor and create a list to be recorded, give that list to the recording technician, and then produce a recorded list. Our consultants could also create a story, and often did, and then we would help them revise and record it. We had geo-social data from the Mexican government census and geo-spatial data from our own GPS units. During the course of the project massive amounts of data were created in a wide variety of formats. Additionally, in the case of this project, language description was happening concurrently with language documentation. The result is that additional data is desired and generated.
That is, language documentation and language description feed each other in a symbiotic relationship. Description helps us understand why this language is so important to document and which data to collect; documenting it gives us the data for doing the analysis to describe the language. The challenge has been: how do we organize the data in ways that are meaningful and useful both for current work and for future work (archiving)? People are evidently doing it all over the world… maybe I just need to learn how they are doing it. In our project there were two opposing needs for the data:
- Data organization for archiving.
- Data organization for current use in analysis and in evaluating what else to document.

It could be argued that a well-planned corpus would eliminate, or at least reduce, the need for flexibility in deciding what else to document. This line of thought has its merits. But flexibility is needed by those who do not try to implement detailed plans.