SIL International has a survey service which operates across the globe in different administrative SIL units. I wonder if the future of survey is no longer looking at where indigenous people are living and what language variations they may have, but rather looking at where these people are going. Consider just the migrants from Nigeria: according to lucify.com, 89,032 Nigerians have immigrated to Europe in the last four years. That is a lot of people. Where do those people come from? What languages do they speak? What linguistic load is being put on European governmental services? What could SIL offer to these governmental agencies? How could various social organizations benefit from SIL's often long-standing work in the regions these immigrants are coming from?
Just a quick thought.
Perception based loosely on facts:
A lot of language documentation money gets pushed towards endangered languages or languages with very few speakers. It is often endowed upon the aspiring academic, who may be promising to create a grammar for a previously unwritten or undescribed language.
Sometimes I have the opportunity to read grammars. I read them and have questions about how the described data sounds, both in context and as elicited. To that end, I wonder whether money would be better spent, both for language documentation and for the benefit of the academy, if organizations funding language documentation research would instead fund the collection of audio and video texts of the data already described in grammars. In a way, this would provide the support that modern grammars should have.
That is, I find that the grammars of many languages (often African languages) are so fraught with errors, or so colored by theoretical disposition, that it would be immensely helpful if these grammars were supported with audio texts. It seems that the focus on small, often dying, languages, requiring an impetus of "adequate" endangerment for funding, shows a predisposition to try and collect specimens of some exotic language. While the collection of rare specimens is good in some sense, it is not always the most dignifying for the language speakers, nor is it really the most helpful for academic pursuits.
This is a quick note to record some of the things I have learned this week about working with lexical data within SIL's software options.
- There is information scattered all over the place:
  - FLEx website: http://fieldworks.sil.org
  - FLEx Google Group: https://groups.google.com/forum/#!forum/flex-list
  - Toolbox website: http://www-01.sil.org/computIng/toolbox/
  - Toolbox Google Group: https://groups.google.com/forum/#!forum/ShoeboxToolbox-Field-Linguists-Toolbox
  - Webonary website: http://webonary.org/ and then on Webonary about data transfer: http://webonary.org/data-transfer/
  - WeSay: http://wesay.palaso.org/
  - A redundancy of the FLEx Google Group: http://tiki.lingtransoft.info/tiki-view_forum_thread.php?comments_parentId=27&topics_offset=1
  - Various introductions to FLEx: http://tiki.lingtransoft.info/Introduction+to+Flex?structure=Navmenu
  - MDF documentation: http://www-01.sil.org/computing/shoebox/mdf.html including this PDF
  - The LIFT format: https://code.google.com/p/lift-standard/
  - LiftTools: http://downloads.palaso.org/LiftTools/
  - XHTML expression of LIFT: http://pathway.sil.org/features/standards/dictionary-xhtml-proposed-standard/
- What should the purpose of the websites be? To distribute the product, or to build community around the product's existence?
My friend Ibrahim Tume Ushe and I had several conversations about gestures in NW Nigeria. In these two videos he shows me some of the more common gestures and explains their meanings.
This summer (June-August) I added 629 new citations to EndNote, mostly by hand. Of those citations, 392 had PDFs attached. I am ready to learn how to use EndNote more effectively. I estimate that I still have 450 PDFs, in various folders from courses and research trips to the library over the last few years, that I need to add to EndNote.
I usually try to download .ris files when I find a resource I want to cite or use. The problem is that EndNote X6 does not allow importing more than one .ris file at a time.
To speed up the process I have learned to use the cat (concatenate) command in the OS X Terminal. I open Terminal, type cd followed by a space, drag the folder containing the .ris files I want to add to EndNote onto the window (over the blinking cursor), and hit enter. I then type cat and drag in all the .ris files I want to concatenate, then type a > symbol followed by the new .ris file's name. The result is a single .ris file containing the data from all the individual .ris files. This lets me go back to EndNote, import the one massive .ris file, and save clicks.
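The steps above can be sketched as a short Terminal session. This is a minimal, self-contained demo; the folder and file names (/tmp/ris-demo, a.ris, b.ris, combined.ris) are illustrative placeholders, not names from my actual library.

```shell
# Set up a scratch folder with two tiny example .ris files
# (placeholder names; in practice you'd cd to your own download folder).
mkdir -p /tmp/ris-demo
printf 'TY  - JOUR\nTI  - First article\nER  -\n' > /tmp/ris-demo/a.ris
printf 'TY  - BOOK\nTI  - Second source\nER  -\n' > /tmp/ris-demo/b.ris

cd /tmp/ris-demo
# Concatenate the individual .ris files into one file for a single EndNote import.
cat a.ris b.ris > combined.ris
```

Dragging a file onto the Terminal window simply inserts its escaped path at the cursor, so the dragged version of this command is equivalent to typing the paths by hand. A wildcard like `cat *.ris > combined.ris` also works on a first run, but re-running it can pick up the output file itself, so explicit file names are safer.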
In the Literacy Mega Course at SIL-UND one of the issues students are asked to consider is Environmental Print.
Sharon MacDonald presents Environmental Print as a way to move people from illiteracy (but with an understanding of contextual clues based on experience and iconicity) to literacy, using print materials found around them (particularly in advertising and on manufactured goods) to deliver or reinforce reading lessons. In this writing I will apply Sharon's general idea to three kinds of cases.
In this post I take a look at some of the software needs of a language documentation team. One of my ongoing concerns with linguistic software development teams (like SIL International's Palaso or LSDev, MPI's archive software group, or a host of other niche software products adapted from mainstream open-source projects) is the approach they take in communicating how to use the various elements of their software together to create useful workflows for linguists participating in field research on minority languages. Many of these software development teams do not assume that potential users coming to their websites want to be oriented to how these software solutions work together to solve specific problems in the language documentation problem space. Now, it is true that every language documentation program is different and will have different goals and outputs, but many of these goals are the same across projects.

New users want to know the top-level organizational assumptions made by software developers. That is, they want to evaluate how software will work in a given scenario (problem space) and to make informed decisions based on the ecosystem that the software will lead them into. This is not unlike users asking which is better, Android or iPhone, and then deciding what works not just on a given device, but considering where they will buy their music and their digital books, and how they will get those digital assets onto a new device when the phone they are about to buy no longer serves them. These digital consequences are not in the mind of every consumer... but they are real consequences nonetheless.
As linguistics and language documentation interface with the digital humanities, there has been a lot of effort to time-align texts and audio/video materials. At one level this is rather trivial to do, and it has the backing of commercial media processes like subtitling in movies. However, at another level this task is often done slightly differently, in XML, for every project (digital corpus curation). At the macro scale the argument is that if the annotation of the audio is in XML and someone wants to do something else with it, they can just convert the XML to whatever schema they desire. This is true.
However, one anecdotal point that I have not heard in discussions of time-aligned texts is a specification for Audio Dominant Text vs. Text Dominant Audio. This may not initially seem very important, so let me explain what I mean.