iPhoto is Apple's default photo management solution. I have used it since early 2004, when I purchased my first Mac. I currently run OS X 10.6.8 and iPhoto '09 (iPhoto version 8.1.2, Build 424). In late 2013, these are considered old versions of both the OS and iPhoto. I have seen more recent versions of iPhoto, as my wife runs 10.7 and a newer version of iPhoto.
In the spring of 2012 I purchased a Canon T3i and started to shoot RAW. (Read: large file sizes and editable images.) So I need a photo editing solution with more power than iPhoto. My iPhoto collection was also growing large, approaching 28,000 images at the time (and why not, after 9 years of collecting photos).
iPhoto is a brilliant way to browse photos and gives easy access to simple tools to crop, rotate, and apply red-eye reduction. However, iPhoto has a weakness when it comes to embedded metadata. If you want to export a photo with its geo-tagged location and keywords applied, then you need to export it as a .jpg; you cannot apply these metadata "enrichments" to the original photo file type. iPhoto's "Export Original" is just that: the original file, not the original plus added metadata.
I have a PDF that I would like to crop to the text and then add consistent white space (margins). The PDF was generated by a Bookeye 4 scanner, which exported the content straight to PDF. So I am trying to do this with Adobe Acrobat 9.2. SIL Americas Area Publishing suggested that I use ScanTailor, an excellent program, but one which I find crashes on OS X.
I have been reading this blog, and its links, about scanning family photos.
I think one relevant point that it brings out for professional contexts is: "Additionally, it [the way of naming the files] will now be documented for whomever inherits all of your work, that you didn’t know this information."
This is especially true of photos, which have a high ratio of metadata fields to files, representing a high curation workload. A metadata schema needs to help archivists administratively determine known unknowns rather than just present empty elements. It is one thing to choose not to describe something; it is another to not have access to the information and be unable to describe it.
Some days I am more clever than others. Today I was working on digitizing about 50 older (30-year-old) cassettes for a linguist. To organize the data I needed to create a folder for each tape, and each folder needed to be sequentially numbered. It is a lot of tedious work - not something I enjoy.
So I looked up a few things in the terminal to see if I could speed up the process. I needed to create quite a few folders, so I looked it up on Macworld's hints site:
So I looked at the mkdir command, which creates new folders or directories. It uses the following syntax: mkdir folder1 folder2 folder3
Now I needed a list of folder names... something like 50 of them.
Now I had a list of 50 folder names, but I still needed to remove the return characters which separated them from each other so that the mkdir command would work on a single line. So I opened up TextEdit, searched for the return characters in the document, and deleted them.
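The same newline clean-up can be done right in the terminal with tr instead of TextEdit. A minimal sketch, where names.txt is a hypothetical stand-in for the exported list of folder names, one per line:

```shell
# Hypothetical example: names.txt stands in for the exported folder list,
# one name per line.
printf 'tape01\ntape02\ntape03\n' > names.txt

# tr translates each newline into a space, so the whole list lands on one
# line and can be handed to mkdir via command substitution.
mkdir $(tr '\n' ' ' < names.txt)
```

This assumes the folder names contain no spaces, since the shell splits the substituted list on whitespace.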
Now I could just paste the 50 folder names into the terminal, hit enter, and it created 50 folders... But I wondered if there was a way to add sequential numbers to a base folder name in the terminal without using Google Spreadsheets...
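There is indeed a way to do this without a spreadsheet. A sketch using seq, which ships with OS X (the tape_ base name is just an example):

```shell
# seq -f applies a printf-style format to each number in the sequence;
# %02g zero-pads to two digits, so this single command creates the
# folders tape_01 through tape_50.
mkdir $(seq -f "tape_%02g" 1 50)
```

On newer shells, brace expansion (mkdir tape_{01..50}) does the same thing, but zero-padded ranges require bash 4, which 10.6 does not ship.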
Twice since the launch of the new SIL.org website, colleagues of mine have contacted me about the new requirement on SIL.org to log in before downloading content from the SIL Language and Culture Archive. Both know that I am connected to the website implementation team. I feel as if they expect me to be able to speak into this situation (as if I even have this sort of power). In fact, I only work with the team in a loose affiliation (from a different sub-group within SIL); I don't make design decisions, social impact decisions, or negotiate the politics of content distribution.
However, I think there are some real concerns among web users about being required to log in prior to downloading, and also some real considerations which web users are not recognizing.
As linguistics and language documentation interface with the digital humanities, there has been a lot of effort to time-align texts with audio/video materials. At one level this is rather trivial to do and has the backing of commercial media processes like subtitles in movies. However, at another level this task is done slightly differently, in XML, for every project (digital corpus curation). At the macro scale the argument is that if the annotation of the audio is in XML and someone wants to do something else with it, then they can just convert the XML to whatever schema they desire. This is true.
However, one anecdotal point that I have not heard in discussions of time-aligned texts is specifications for Audio Dominant Text vs. Text Dominant Audio. This may not initially seem very important, so let me explain what I mean.
I have been working on describing the FLEx software ecosystem (for both a blog post and an infographic). In the process I googled "language documentation" workflow and was promptly directed to resources created for InField and aggregated via ctldc.org - an amazing set of resources. The ctldc.org website is well put together, and the content from InField 2008 and 2010 is impressive - I wish I could have been there. I am almost convinced that most SIL staff pursuing linguistic fieldwork should just go to InField... But it is true that InField seems to be targeted at someone who has had more than one semester of linguistics training.
I feel that in the language and culture documentation community there is a tension between “documenting” and “globalizing”, in the sense that what we as digital natives and cultural technologists think of as “living” is in part “documenting”.
Now, in some sense “Language Documentation” is an academic pursuit in its own right, independent of linguistics, if it has a plan and tries to capture elements of the expression of the culture and language as it is spoken or acted out. I think there is a bit of confusion in the literature as linguists move from linguistics to language development and community development. This is particularly evident with the use of video in language documentation.