Reflections on CRASSH

In July I presented a paper at CRASSH in Cambridge. It was a small conference, but being in Europe it was good to see the variety of projects going on in Digital Humanities and Linguistics, as well as in Cloud Computing and Linguistics. One project in particular, TypeCraft, presented by Dorothee Beermann Hellan, stands out as rather well done and promising. The ideas behind this project are well thought out and seem to be well implemented. It would be nice to see this product integrated with some other linguistics and language documentation cloud offerings, e.g. Project LEGO from the Linguist’s List or the Max Planck Institute’s LEXUS project. While TypeCraft does allow for round-tripping of data with XML, what I am talking about is a consolidated user experience for both professional linguists and minority-language users.

A note on foundational technologies:

  • It appears that LEXUS is built on BaseX with Cocoon and XML.
  • The front page of TypeCraft has a very Wikipedia-like feel, but this might not be its true foundational technology.
  • The Linguist’s List often does its work in ColdFusion, and the LEGO project definitely has this feel about it.

Remoteness Index

For the last few weeks I have been thinking about how one can measure the impact on a language of a language community's contact with other languages. I have been looking for ways that remoteness has been measured in the past. I recently ran across a note on my iPhone from when I was in Mexico, dated March 8, 2011.

A metric for measuring the language shift, contact, and relatedness of indigenous languages of Mexico

  • The formation of areal features
  • Population density
  • Trade and social networks
  • Political affiliation
  • Geographic factors
  • Roads and travel opportunities

I remember writing this note: I was standing in front of a topographical map showing terrain regions, which also had the language areas of Mexico outlined. It occurred to me (having also recently had a conversation with a local anthropologist on the matter of trade routes and mountain passes) that these sorts of factors should be accounted for in language endangerment, and if they can be accounted for then they should also be able to be graphed (on a map, of course). The major issue is that if one just plots a language area without showing the population/speaker density in that area, then the viewer of that map will get a warped view of the language situation. Population density alone also does not indicate where language attrition is unlikely to occur. And language contact does not automatically happen on the edges of a language area. That is to say, in a country with mountain passes, there will likely be more language contact in the passes, as various groups travel to market, than in higher-elevation mountain villages. This leads to the issue of language diffusion and the representation of language diffusion. But the issue is not just one of language diffusion; it is also one of population diffusion, population mobility, and the accessibility of various areas. So in terms of projecting, assessing, and plotting language vitality, remoteness should be part of the equation. But remoteness is not just a factor on its own; it is more of an index built from the issues mentioned above, specifically geographical remoteness and social remoteness (or contact, even with other villages and cities in the same language and ethnic communities).
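
To make the idea concrete, here is a minimal sketch of what such a remoteness index could look like as a weighted combination of normalized factor scores. The factor names, scores, and weights are hypothetical illustrations, not an established metric.

```python
# A minimal sketch of a remoteness index as a weighted combination of
# normalized factor scores (0 = well connected, 1 = maximally remote).
# All factor names, weights, and scores here are hypothetical.

def remoteness_index(factors, weights):
    """Weighted average of factor scores; result stays in [0, 1]."""
    total = sum(weights.values())
    return sum(factors[name] * w for name, w in weights.items()) / total

weights = {
    "travel_time": 0.3,         # geographic remoteness: roads, passes
    "trade_contact": 0.25,      # trade and social networks
    "population_density": 0.2,  # sparse population → higher score
    "political_ties": 0.25,     # political affiliation with other groups
}

# Hypothetical scores: a high mountain village vs. a market town.
village = {"travel_time": 0.9, "trade_contact": 0.8,
           "population_density": 0.7, "political_ties": 0.6}
town = {"travel_time": 0.2, "trade_contact": 0.1,
        "population_density": 0.3, "political_ties": 0.2}

print(round(remoteness_index(village, weights), 3))  # → 0.76
print(round(remoteness_index(town, weights), 3))     # → 0.195
```

The point of the sketch is only that once the factors are quantified per map cell, the index itself is trivially computable and therefore plottable.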

I am not currently aware of any such index, much less a project which plots one onto a geographical area. However, I have found some previous work worth mentioning which might be related and relevant.

Modeling Language Diffusion With ArcGIS

There is an interesting paper and project on modeling language diffusion with ArcGIS. It was prepared by Christopher Deckert in 2004 and presented at the 24th ESRI Users Conference.

Remote Areas of the World

The magazine NewScientist has an article from April 2009 about the remotest places in the world. It has several maps and abstractions showing how remote (with reference to travel time) places in the world are. The following maps come from the NewScientist article.

Map showing the accessibility from one point to another.

Detail of roads in west Africa

Nowhere three weeks from anywhere

Map showing the remoteness of the Tibetan Plateau

The ASGC Remoteness Structure

Another promising resource I found is the ASGC Remoteness Structure which Australia has developed to show how remote parts of Australia are. There is a series of papers explaining the methods behind the algorithms used and the purpose of the study. One of the outputs was the map below.

Australia Remoteness map

The Territoriality of Public Health Governance in Mexico

The last resource I am going to mention here is The Territoriality of Public Health Governance in Mexico, a study which plots the remoteness of health care in Mexico.

The Job

Today several people are getting together to have a meeting about my job(s)… so I thought I would post a few diagrams to try to explain my job(s).

Hugh's Life

A diagram of different areas of my life

The core Area of my Job

My Core Area of involvement

The Core Things I am involved in

Some of the outside things I am involved in:

The Details

The Detailed Stuff

A Story Breeds A Story

While I was in Malaysia, I had the honor of meeting and talking quite a bit with Professor Emeritus Howard McKaughan. We talked about his linguistics-based work in Mexico, the Philippines, and Malaysia. He can tell stories, interesting stories.

Howard - Story Telling

There is something unique about his generation of Americans (currently in their 80s and 90s): their ability to craft and tell stories. I feel that this is a cultural point I don’t have. It could be because I am third culture, or because I talk too much about the macro-details, or it might simply be because I am long-winded.

Presentation version vs. Archival version of Digital Audio files

What is an archival version of an audio file?

An archival version of an audio file is a file which represents the original sound faithfully. In archiving we want to keep a version of the audio which can be used to make other products and can also be used directly itself if needed. This is usually done through PCM (pulse-code modulation). There are several file types associated with PCM, or raw, uncompressed digital audio that is faithful to the original signal. These are:

  • Standard Wave
  • AIFF
  • Wave 64
  • Broadcast Wave Format (BWF)

One way to understand the difference between audio file formats is understanding how different formats are used. One place which has been helpful to me has been the DOBBIN website, as they explain their software and how it can change audio from one PCM-based format to another.

Each one of these file types has the flexibility to hold various kinds of components, i.e. several channels of audio can be in the same file, or one can have .wav files with different bit depths or sampling rates. But they are each an archive-friendly format. Before one says that a file is suitable for archiving simply based on its file format, one must also consider things like sample rate, bit depth, embedded metadata, channels in the file, etc. I was introduced to DOBBIN as an application resource for audio archivists by a presentation by Rob Poretti. One additional thing worth noting in terms of archival versions of digital audio pertains to born-digital materials. Sometimes audio is recorded directly to a lossy compressed audio format. It would be entirely appropriate to archive such a born-digital file type on the basis of its content, though it should be noted that, ideally, the recording would have been made in a PCM file format.
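
As a concrete illustration of this kind of intake check, here is a minimal sketch using only Python's standard library `wave` module; it writes a tiny stand-in PCM file and then reads back the properties an archivist would want to verify (the file name and values are arbitrary).

```python
# A minimal sketch of checking the technical properties that matter when
# judging whether a WAV file is archive-worthy: channel count, bit depth,
# and sample rate. The file written here is a stand-in, not a real recording.
import struct
import wave

# Write a short mono, 16-bit, 44.1 kHz PCM file.
with wave.open("example.wav", "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 2 bytes per sample = 16-bit
    w.setframerate(44100)  # CD-quality sample rate
    w.writeframes(struct.pack("<100h", *([0] * 100)))  # 100 silent samples

# Inspect the file the way an archive intake check might.
with wave.open("example.wav", "rb") as r:
    channels = r.getnchannels()
    bit_depth = r.getsampwidth() * 8
    sample_rate = r.getframerate()

print(channels, bit_depth, sample_rate)  # → 1 16 44100
```

The same three values are exactly what an archive would record as technical metadata alongside the file.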

What is a presentation version? (of an audio file)

A presentation version is a file created with a specific use of the content in mind. There are several general characteristics of this kind of file:

  1. It is one that does not retain the whole PCM content.
  2. It is usually designed for a specific application. (Use on a portable device, or personal audio player)
  3. It can be thought of as a derivative product from an original audio or video stream.

In terms of file formats, there is not just one file format which is a presentation format. There are many formats. This is because there are many ways to use audio. For instance there are special audio file types optimized for various kinds of applications like:

  • 3G and WiFi Audio and A/V services
  • Internet audio for streaming and download
  • Digital Radio
  • Digital Satellite and Cable
  • Portable players

A brief look at an explanation by Cube-Tec might help to get the gears moving. It is part of the inspiration for this post.

This means there is a long list of potential audio formats for the presentation form.

  • AAC (aac)
  • AC3 (ac3)
  • Amiga IFF/SVX8/SV16 (iff)
  • Apple/SGI (aiff/aifc)
  • Audio Visual Research (avr)
  • Berkeley/IRCAM/CARL (irca)
  • CDXA, like Video-CD (dat)
  • DTS (dts)
  • DVD-Video (ifo)
  • Ensoniq PARIS (paf)
  • FastTracker2 Extended (xi)
  • Flac (flac)
  • Matlab (mat)
  • Matroska (mkv/mka/mks)
  • Midi Sample dump Format (sds)
  • Monkey’s Audio (ape/mac)
  • Mpeg 1&2 container (mpeg/mpg/vob)
  • Mpeg 4 container (mp4)
  • Mpeg audio specific (mp2/mp3)
  • Mpeg video specific (mpgv/mpv/m1v/m2v)
  • Ogg (ogg/ogm)
  • Portable Voice format (pvf)
  • Quicktime (qt/mov)
  • Real (rm/rmvb/ra)
  • Riff (avi/wav)
  • Sound Designer 2 (sd2)
  • Sun/NeXT (au)
  • Windows Media (asf/wma/wmv)

Aside from the file format difference in media files (.wav vs. .mp3), there are three other differences to be aware of:

  1. Media stream quality variations
  2. Media container formats
  3. Possibilities with embedded metadata

Media stream quality variations

Within the same file type there might be variation in the quality of the audio. For instance, MP3 files can have variable-rate encoding or a constant rate of encoding, and a constant rate of encoding can be high or low. WAV files can likewise have a high or a low bit depth and a high or a low sample rate. Some file types can hold more channels than others: AAC files can have up to 48 channels, whereas MP3 files can only have up to 5.1 channels.

One argument I have heard in favor of saving disk space is to use lossless compression rather than WAV files for archive-quality (and archive-version) recordings. As far as archiving is concerned, these lossless compression formats are still product-oriented file formats. One thing to realize is that not every file format can hold the same kind of audio. Some formats have limits on the bit depth of the samples they can contain, or on the number of audio channels they can have in a file. This is demonstrated in the table below, taken from Wikipedia. This is where understanding the relationship between a file format, a file extension, and a media container format is really important.

  • ALAC — lossless; sample rates 44.1 kHz to 192 kHz; 16 or 24 bits per sample; latency unknown; stereo; multichannel supported.
  • FLAC — lossless; sample rates 1 Hz to 655,350 Hz; 8, 16, 20, 24, (32) bits per sample; latency 4.3 ms – 92 ms (46.4 ms typical); stereo; multichannel up to 8 channels.
  • Monkey’s Audio — lossless; sample rates 8, 11.025, 12, 16, 22.05, 24, 32, 44.1, and 48 kHz; bits per sample unknown; latency unknown; stereo; no multichannel.
  • RealAudio Lossless — lossless; sample rates, bits per sample, and latency vary (see article); stereo; multichannel up to 6 channels.
  • True Audio — lossless; sample rates 0–4 GHz; 1 to more than 64 bits per sample; latency unknown; stereo; multichannel up to 65,535 channels.
  • WavPack Lossless — lossless or hybrid; sample rates 1 Hz to 16.777216 MHz; bits per sample varies in lossless mode, 2.2 minimum in lossy mode; latency unknown; stereo; multichannel up to 256 channels.
  • Windows Media Audio Lossless — lossless; sample rates 8, 11.025, 16, 22.05, 32, 44.1, 48, 88.2, and 96 kHz; 16 or 24 bits per sample; latency over 100 ms; stereo; multichannel up to 6 channels.
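
As an aside on the disk-space argument mentioned above, a quick back-of-the-envelope calculation shows why uncompressed PCM invites that argument in the first place; the numbers below are simple arithmetic, not measurements of any particular file.

```python
# A back-of-the-envelope calculation of uncompressed PCM data sizes.
# These figures follow directly from sample rate x bit depth x channels.

def pcm_bytes_per_minute(sample_rate, bit_depth, channels):
    """Uncompressed PCM data for one minute of audio, in bytes."""
    return sample_rate * (bit_depth // 8) * channels * 60

cd_quality = pcm_bytes_per_minute(44100, 16, 2)    # CD-quality stereo
archive_grade = pcm_bytes_per_minute(96000, 24, 2)  # 96 kHz / 24-bit stereo

print(cd_quality / 1_000_000)      # → 10.584 (MB per minute)
print(archive_grade / 1_000_000)   # → 34.56 (MB per minute)
```

At roughly 10–35 MB per minute, a large field-recording collection adds up quickly, which is exactly why lossless compression gets proposed for the archival copy.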

Media container formats

Media container formats can look like file types, but they are really containers of file types (think of a folder with an extension). Often they allow for the bundling of audio and video streams with metadata and then enable this set of data to act like a single file. On Wikipedia there is a really nice comparison of container formats.

MP4 is one such container format. Apple Lossless data is stored within an MP4 container with the filename extension .m4a – this extension is also used by Apple for AAC audio data in an MP4 container (same container, different audio encoding). However, Apple Lossless is not a variant of AAC (which is a lossy format), but rather a distinct lossless format that uses linear prediction similar to other lossless codecs such as FLAC and Shorten. Files with a .m4a extension generally do not have a video stream, even though MP4 containers can also hold a video stream.

MP4 can contain:

  • Video: MPEG-4 Part 10 (H.264) and MPEG-4 Part 2
    Other compression formats are less used: MPEG-2 and MPEG-1
  • Audio: Advanced Audio Coding (AAC)
    Also MPEG-4 Part 3 audio objects, such as Audio Lossless Coding (ALS), Scalable Lossless Coding (SLS), MP3, MPEG-1 Audio Layer II (MP2), MPEG-1 Audio Layer I (MP1), CELP, HVXC (speech), TwinVQ, Text To Speech Interface (TTSI) and Structured Audio Orchestra Language (SAOL)
    Other compression formats are less used: Apple Lossless
  • Subtitles: MPEG-4 Timed Text (also known as 3GPP Timed Text).
    Nero Digital uses DVD Video subtitles in MP4 files

This means that an .mp3 file can be contained inside of an .mp4 file. This also means that audio files are not always what they seem to be on the surface. This is why I advocate that an archive of digital files which also functions as a digital publishing house use technical metadata as discovery metadata. Filetype alone is not enough to know about a file.
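
One way to see that "filetype is not enough" in practice is to sniff a file's magic numbers instead of trusting its extension. The sketch below checks the well-known signatures for RIFF/WAVE, the MP4 ftyp box, and ID3-tagged MP3 files, using hand-built headers as stand-ins for real files.

```python
# A minimal sketch of identifying a file's real container from its first
# bytes rather than its extension. The signatures checked are the
# well-known magic numbers for RIFF/WAVE, MP4 (ftyp box), and ID3v2.

def sniff_container(header: bytes) -> str:
    if header[0:4] == b"RIFF" and header[8:12] == b"WAVE":
        return "WAV (RIFF container)"
    if header[4:8] == b"ftyp":          # MP4-family files carry an ftyp box
        return "MP4 family container"
    if header[0:3] == b"ID3":           # ID3v2 tag at the start of an MP3
        return "MP3 with ID3v2 tag"
    return "unknown"

# Hand-built headers standing in for real files.
wav_header = b"RIFF" + b"\x24\x08\x00\x00" + b"WAVE"
mp4_header = b"\x00\x00\x00\x20" + b"ftyp" + b"isom"
mp3_header = b"ID3\x03\x00\x00" + b"\x00\x00\x00\x0a"

print(sniff_container(wav_header))  # → WAV (RIFF container)
print(sniff_container(mp4_header))  # → MP4 family container
print(sniff_container(mp3_header))  # → MP3 with ID3v2 tag
```

An archive intake workflow could run a check like this and record the result as technical metadata, catching files whose extension misrepresents their contents.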

Possibilities with embedded metadata

Audio files also vary greatly in what kinds of embedded metadata and metadata formats they support. MPEG-7, BWF, and MP4 all support embedded metadata. But this does not mean that audio players in the consumer or prosumer market respect this embedded metadata. ARSC has an interesting report on the support for embedded metadata in audio recording software. Aside from this disregard for embedded metadata, there are various metadata formats embedded in different file types; one common type, ID3, is popular with .mp3 files. But even ID3 comes in different versions.
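
The version differences in ID3 are visible right in the tag header. The sketch below parses an ID3v2 header following the published layout ("ID3", major version, revision, flags, then a four-byte syncsafe size); the example header is hand-built for illustration.

```python
# A minimal sketch of reading an ID3v2 header to see which ID3 version a
# .mp3 file carries. Per the ID3v2 layout: bytes 0-2 are "ID3", byte 3 is
# the major version, byte 4 the revision, byte 5 flags, and bytes 6-9 a
# "syncsafe" tag size (7 usable bits per byte).

def parse_id3v2(header: bytes):
    if header[0:3] != b"ID3":
        return None                      # no ID3v2 tag present
    major, revision = header[3], header[4]
    size = 0
    for b in header[6:10]:               # decode the syncsafe integer
        size = (size << 7) | (b & 0x7F)
    return {"version": f"2.{major}.{revision}", "tag_size": size}

# A hand-built ID3v2.3.0 header claiming a 257-byte tag.
header = b"ID3" + bytes([3, 0, 0]) + bytes([0, 0, 2, 1])
print(parse_id3v2(header))  # → {'version': '2.3.0', 'tag_size': 257}
```

A check like this is how software can tell an ID3v2.3 tag from an ID3v2.4 tag before deciding which frames it can read.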

In archiving Language and Culture Materials, our complete package often includes audio but is rarely just audio. However, understanding the audio components of the complete package helps us understand what it needs to look like in the archive. In my experience working with the Language and Culture Archive, most contributors are not aware of the difference between archival and presentation versions of audio formats, and those who think they are, are generally not aware of the differences in the codecs used (sometimes with the same file extension). From the archive’s perspective this is a continual point of user/submitter education. This past week I have taken the time to listen to a few presentations by audio archivists from the 2011 ARSC convention. These show, in general, that the kinds of issues I have been dealing with in the Language and Culture Archive are not unique to our context.

The Complete Audio Package

Language maps like heat maps

There is a myriad of difficulties in overlaying language data with geographical data. But it has been done and can be done. While I was working in México on a language documentation project, I learned that some of the language mixing (not quite diglossia, but rather two people groups with different languages living in the same spaces) was due to geographical and economic factors pulling them into the same locations. In the particular case I am thinking of, there was a mountain pass and a valley on the way to the major center of trade. In this sort of context the interesting things are displayed not when a polygon is drawn showing a territorial overlay of where various language speakers live, but when something is drawn showing the density or population dispersion relative to the general population. Some of the most detailed (in terms of global perspective) language maps can be found in the Ethnologue.

Western Central Mexico from the Ethnologue

However, as I was working on the language documentation project, I found out how much effort actually goes into that sort of map. ArcGIS, the software used to create the maps, cannot auto-generate a polygon a certain distance around a combined set of given points. A set of points can be selected and each point can be given a 5-mile radius, but each polygon has to be hand drawn. The sort of graphical overlay used in the Ethnologue does not show the density of speakers of a language in an area relative to the total population (in the Ethnologue’s defense, I am not sure it is supposed to). For instance, if I wanted to know “What is the density of speakers in the Me’phaa area of México relative to speakers of other languages?”, that would show me some dispersion and, by implication, the peopling of the area. This sort of geographical overlay may be closer to displaying social networks, not really bilingualism or diglossia. There might be some bilinguals or some average level of bilingualism there, but the heat-map method of plotting still looks at the density of speakers in an area. A similar map might be created of New York City, where certain languages are given a color based on their distribution density in the area. Additionally, these sorts of data overlays are probably more prone to lend insights on language attrition patterns or language-speaker migration patterns. These hand-drawn polygons also change (a little) from edition to edition. Because the data used to create the polygons is not referenced (cited), it is hard to tell if the change is keeping pace with language attrition and/or population movement, or if the changes are due to a better linguistic understanding of a particular area. When looking at the large-area maps in the Ethnologue, it is hard to tell if the red dots represent the “traditional” language area (or the geographical center thereof) or the current geographical center of the speaking area.
Either way, the plotting functions as if it were a heat map showing the diversity of languages over a geographical area.

Americas Map from the Ethnologue

I am generally on the lookout for web apps and APIs which can be used to overlay data and bring new insights to situations through graphical representations. I recently found a tool for overlaying data on Google Maps. This tool creates heat maps given data from another source. It is called gHeat, and it was brought to my attention by Ben O’Steen, who modified gHeat to display prices for student properties in the UK. My initial thought was: “Wow, how can we do language maps like this?”

Student Property Heat Map

Obviously, I still think that language-based heat maps could provide language workers worldwide with visualizations of data that could really add clarity to the language vitality situation.
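
As a sketch of what the underlying computation for such a language heat map might look like, the following turns hypothetical speaker locations into a smoothed density grid, which is the kind of data a tool like gHeat renders as colored map tiles; all coordinates here are made up.

```python
# A minimal sketch of the heat-map idea: turning point data (hypothetical
# speaker locations) into a Gaussian-smoothed density grid that a mapping
# tool could render as a colored overlay. All coordinates are invented.
import math

def density_grid(points, width, height, radius=2.0):
    """Gaussian-smoothed point density on a width x height cell grid."""
    grid = [[0.0] * width for _ in range(height)]
    for px, py in points:
        for y in range(height):
            for x in range(width):
                d2 = (x - px) ** 2 + (y - py) ** 2
                grid[y][x] += math.exp(-d2 / (2 * radius ** 2))
    return grid

# Hypothetical speaker locations clustered near a mountain pass at (5, 5),
# with one outlying settlement at (1, 9).
speakers = [(5, 5), (5, 6), (6, 5), (1, 9)]
grid = density_grid(speakers, width=10, height=10)

# The cell at the cluster is "hotter" than the cell at the outlier.
print(round(grid[5][5], 2), round(grid[9][1], 2))  # → 2.78 1.07
```

Plotting speaker density this way, rather than drawing a territorial polygon, is exactly the shift from "where a language is spoken" to "how densely it is spoken there" argued for above.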