Over the last several months I have been looking for and comparing digitization services for audio, film, and images (slides and more). I have been doing this as part of the ongoing work at the Language and Culture Archive to preserve the linguistic and cultural heritage of the people groups SIL International has encountered and served. I have not come to any hard and fast conclusions on “what is the best service provider”. This is partly because we are still weighing various outsourcing options, and looking at multiple mediums is time consuming. There is also the issue of identifying archival standards and creating corporate policy for the digitization of these materials. I am presenting several names here as the results of several searches for digitization service providers.
Another option the Archive has been looking at is whether the quantity of the work makes it cost prohibitive to have it professionally done. That is, we might be better served by buying the equipment and doing the work in house. So in the process I have also been looking at people’s experience with various kinds of equipment and technology used in scanning.
I have been reading a lot of user stories like Dave Dyer’s reflection on Slide Transfer and MacIntouch Reader Reports from 26 April 2006 on Slide Digitization.
The last couple of weeks I have been working on applying the Zachman framework for enterprise architecture [1] to two projects. I have been struggling through the first row and then skipped around a bit. I think I have found the part of the project (any project) I am most passionate about: working with the Human Interface Architecture and explaining it, as a designer, to the builder of the Presentation Architecture. In my mind this level needs to be closely related to the Business Process Model and to the List of Business Goals/Strategies.
Where do I see myself most helpful in the large project...
John Zachman. 2008. Diagram of: A framework for enterprise architecture. http://zachmaninternational.com/2/Zachman_Framework.asp. [Accessed: 2 December 2011] [PDF] [Link]
Because I have been on the team doing the SIL.org redesign, I have been surveying the Open Source landscape to see what is available to connect Drupal with DSpace data stores. We are planning on making DSpace the back-end repository, with another CMS running the presentation and interactive layers. I found a module, still in development, which parses DSpace's XML feeds. However, this is not the only thing I am looking at. I am also looking at how we might deploy Omeka. Presenting the entire contents of a Digital Language and Culture Archive, and citations for its physical contents, is no small task. In addition to past content there is also future content. That is to say, archiving is not devoid of publishing, so there is also the PKP project. (SIL also currently has a publishing house, whose content needs version control and editorial workflows, which interact with archiving and presentation functions.)
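As a rough sketch of what "parsing DSpace's XML feeds" can look like, the snippet below pulls Dublin Core records from a DSpace OAI-PMH endpoint. The endpoint URL is a placeholder, and the module mentioned above may work quite differently; this is only meant to make the Drupal-to-DSpace data flow concrete.

```python
# Minimal sketch: harvest Dublin Core records from a DSpace OAI-PMH feed.
# The base URL below is hypothetical; substitute your repository's endpoint.
from urllib.request import urlopen
import xml.etree.ElementTree as ET

OAI_BASE = "https://repository.example.org/oai/request"  # placeholder
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def list_records(base_url=OAI_BASE):
    """Fetch one page of records and yield (identifier, title) pairs."""
    url = f"{base_url}?verb=ListRecords&metadataPrefix=oai_dc"
    with urlopen(url) as response:
        tree = ET.parse(response)
    for record in tree.iter(f"{{{NS['oai']}}}record"):
        header = record.find(f"{{{NS['oai']}}}header")
        identifier = header.findtext(f"{{{NS['oai']}}}identifier")
        title = record.find(f".//{{{NS['dc']}}}title")
        yield identifier, title.text if title is not None else ""

if __name__ == "__main__":
    for identifier, title in list_records():
        print(identifier, "-", title)
```

A presentation layer (Drupal, Omeka, or anything else) could consume records like these and render them, leaving DSpace as the system of record.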
Omeka
Wally Grotophorst has a really good reflection on Omeka and DSpace [1]. I am not sure that it is current, but it does present the problem space quite well. Tom Scheinfeldt at Omeka also has a nice write-up on why Omeka exists, titled "Omeka and Its Peers" [2]. It is really important to understand Omeka's place in the ecosystem of content delivery to content consumers by qualified site administrators.
@Mire talks about what DSpace could learn from Omeka [3].
There is also a DSpace mailing list discussion covering some DSpace technologies for mixing with OAI-ORE and Fedora, Omeka, and Drupal.
Wally Grotophorst. 4 March 2008. DSpace And Omeka. iNODE: The weblog of Digital Programs and Systems at George Mason University Libraries. http://timesync.gmu.edu/wordpress/?p=485 . [Accessed: 26 November 2011] [Link]
Tom Scheinfeldt. 21 September 2010. Omeka and Its Peers. http://omeka.org/blog/2010/09/21/omeka-and-peers/ [Accessed: 26 November 2011] [Link] [Also posted on Tom's blog]
@Mire. 20 May 2010. What DSpace could learn from Omeka. http://www.facebook.com/notes/mire/what-dspace-could-learn-from-omeka/393758568767 . [Accessed: 26 November 2011] [Link]
One of the things I enjoy is reading about the licenses that CC has retired. Usually they do a great job of explaining why they are retiring a license. Understanding these use cases and their context is a really informative view of society.
One interesting retired license is the Sampling+ License. They did a really good job of explaining why they were retiring it. One of the interesting exercises they talk about is how they had to go through the machine-readable description of the license, basically mapping out the assertions.
Sampling+ is interesting because it is targeted at sound. It makes me wonder whether sound/audio can still be licensed under Creative Commons if it is not protected by copyright.
I have recently been reading the blog of Martin Fenner and came upon the article Personal names around the world [1]. His post is in fact a reflection on a W3C paper of the same title, Personal Names around the World (several other reflections are collected at http://www.w3.org/International/wiki/Personal_names). This is apparently coming out of the i18n effort and is intended to help authors and database designers make informed decisions about names on the web.
I read Martin’s post with some interest because in Language Documentation getting someone’s name as a source, or for informed consent, is very important (from a U.S. context). Working in an archive dealing with language materials, I see a lot of names. One interesting situation, which came to me from an Ecuadorian context, was different from anything I have seen in the W3C paper or in the W3C discussion. The naming convention went like this:
The elder was known by the younger’s name plus a relationship.
My suspicion is that it is taboo to name the dead. So to avoid possibly naming the dead, the younger person was referenced and the relationship was invoked. This affected me in the archive because I am supposed to note who the speaker is on the recordings. In lieu of the speaker’s name, I have the young son’s first name (he is well known in the community and is in his 30s or so) and I have the relationship. So in English this might sound like “John’s mother”. Now what am I supposed to put in the metadata record for the audio recordings I am cataloging? I do not have a name, but I do have a relationship to a person known to the community.
I inquired with a literacy consultant who has worked in Ecuador with indigenous people for some years. She informed me that in one context she worked in, everyone knew what family line they were from, and all the names were derived from that family line by position. It was such that to call someone by their name was an insult.
It sort of reminds me of this sketch by Fry and Laurie.
Martin Fenner. 14 August 2011. Personal names around the world. PLoS Blog Network. http://blogs.plos.org/mfenner/2011/08/14/personal-names-around-the-world . [Accessed: 16 September 2011]. [Link]
Working in an archive, I deal with a lot of metadata. Some of this metadata is from controlled vocabularies. Sometimes they show up in lists. Sometimes these controlled vocabularies can be very large, like the names of languages: there is a limited number of languages, but that number is just over 7,000. I like to keep an eye out for how websites optimize the options presented to users. Facebook has a pretty cool feature for narrowing down the list of possible family relationships someone has to you, i.e. a sibling could be a brother/sister, step-brother/step-sister, or a half-brother/half-sister. But if the sibling is male, it can only be a brother, step-brother, or half-brother.
Facebook narrows the logical selection down based on attributes of the person mentioned in the relationship.
All the relationship options.
That is, if I select Becky, my wife, as the person to be in a relationship with me, then Facebook determines, based on her gender attribute, that she can only be referenced by the female relationship terms.
Menu showing just some relationships, based on an attribute of the person referenced.
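The same pattern can be applied to any large controlled vocabulary: filter the options by attributes of the thing being described before showing them. The sketch below is a minimal, hypothetical illustration of that filtering logic; the term list and attribute names are my own, not Facebook's or any archive's schema.

```python
# Minimal sketch of attribute-based filtering of a controlled vocabulary.
# Term lists and attributes here are illustrative, not a real schema.
RELATIONSHIP_TERMS = [
    {"label": "brother",      "gender": "male",   "generation": "same"},
    {"label": "sister",       "gender": "female", "generation": "same"},
    {"label": "step-brother", "gender": "male",   "generation": "same"},
    {"label": "step-sister",  "gender": "female", "generation": "same"},
    {"label": "half-brother", "gender": "male",   "generation": "same"},
    {"label": "half-sister",  "gender": "female", "generation": "same"},
    {"label": "mother",       "gender": "female", "generation": "older"},
    {"label": "father",       "gender": "male",   "generation": "older"},
]

def narrow_options(terms, **attributes):
    """Return only the terms whose attributes match the person selected."""
    return [
        term["label"]
        for term in terms
        if all(term.get(key) == value for key, value in attributes.items())
    ]

# A female person in the same generation can only be sister/step-sister/half-sister.
print(narrow_options(RELATIONSHIP_TERMS, gender="female", generation="same"))
```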
I have had some ideas I wanted to try out for using the iPad as a tool for collecting photo metadata. Working in a corporate archive, I have become aware of several collections of photos without much metadata associated with them.
The photos are the property of (or are in the custodial care of) the company I work at (in their corporate archive).
The subjects of the photos fall into two general areas:
The minority language speaking people groups that the employees of a particular company worked with, including anthropological topics like ways of life, etc.
Photos of operational events significant to telling the story of the company holding the photos.
Archives in more modern contexts are trying to show their relevance not only to academics, but also to general members of communities. In the United States there is a whole social history movement. There are community preservation societies which take on the task of collecting old photographs and their stories, preserving them, and presenting them for future generations.
The challenge at hand is: "How do we enrich photos in the collections of archives by adding metadata to them?" There are many solutions to this kind of task. The refining, distilling, and reduction of stories and memories to writing, and even to metadata fields, is no easy task, nor is it a task that one person can do on their own. One solution often employed by community historians is the personal interview. Interviewing the photographers, or people who were at an event, and asking them questions about a series of photos creates an atmosphere of inquisitiveness, one where the story-teller is valued because they have a story-listener. This basic personal connection allows interactions to occur across generational and technological barriers.
The crucial question is: "How do we facilitate an interaction which is positive for all the parties involved?" The effort and thinking behind answering this question has more to do with shaping human interactions than with anything else. We are also talking about using technology in this interaction. This is true UX (User Experience).
Past Experience
This past summer I had several experiences with facilitating one-on-one interactions between knowledgeable parties working with photographs and someone acting on behalf of the corporate archive. To facilitate this interaction a GoogleDocs spreadsheet was set up and the person acting on behalf of the archive was granted access to it. The individual conducting the interview and listening to the stories brought their own netbook (small laptop) from which to enter any collected data. They were also given a photo album full of photos, which the interviewee would look through. This set-up required overcoming several local environmental challenges. As discussed below, some of these challenges were better addressed than others.
Association of Data to a Given Photo
The first challenge was keeping up to 150 photos organized during an interview so that metadata about any given photo could be collected and associated with only that photo. This was addressed by adhering an inventory sticker to the back of each photo and assigning each photo a single row in the GoogleDocs spreadsheet. Using GoogleDocs was not the ideal solution, but rather a solution with some compromises:
Strengths of GoogleDocs
One of the great things about GoogleDocs is that the capability exists for multiple people to edit the spreadsheet simultaneously.
Another strength of GoogleDocs is the sidebar chat feature: if there was a question during the interview, help could be had very quickly from management (me, who was offsite).
The data can be exported in the following formats: .xlsx, .xls, .csv, .pdf.
There was no cost to deploy the technology.
It is accessible through a web-browser in an OS neutral manner.
The document is available wherever the internet is available.
A single solution could be deployed and used by people digitizing photos, recording written metadata on the photos, and gathering metadata during an interview.
Most people acting on behalf of the archive were familiar with the technology.
Pitfalls of GoogleDocs
More columns exist in the spreadsheet than can be practically managed (the columns are presented in a table below). There are about 48 values in a record and about 40,000 records.
More columns than can be practically managed
Does not display the various levels of data as levels in the user interface.
Cannot remove unnecessary fields from the UI for different users. (No role-based support.)
Only available when there is internet.
Maximizing of Interview Time
To maximize the time spent with the interviewee, the photos and any metadata written on or known about a photo were put into the GoogleDocs spreadsheet prior to the interview. Sometimes this was done not by the interviewer but by someone else working on behalf of the archive. During the interview the interviewer could tell which data fields were empty by looking for the gray cells in the spreadsheet. However, just because the cells were empty did not mean that the interviewee was more prone to provide the desired, unknown, information.
Grey Areas Show Metadata fields which are empty
Data Input Challenges
One unanticipated challenge encountered in the interviews was that, as the interviewer brought out an album or two of photos, the interviewees were able to cover more photos than the interviewer could record.
Let me spell it out. There is one interviewer and two interviewees, and there are 150 photos in an album lying open on the table. All three participants are looking at the photo album. Interviewee A says, "Look, that is so-and-so," and then interviewee B (because the other page is closer to them) says, "And this is so-and-so!" This happens for about 8 of the 12 facing photos. Because the interviewer is still typing the first name mentioned, they ask, "And when do you think that was?" But the metadata still comes in faster, as the second interviewee did not hear the question and the first one did, but is still thinking. The bottom line is that more photos are viewed and commented on than can be recorded.
Something that could help this process would be a way to slow down (or moderate) the interviewees' access to the photos, something that could synchronize the processing time with the viewing time. Scanning the photos and then displaying them on a tablet slows down the viewing process and integrates the recording of data with the viewing of photos.
Positional Interaction Challenges
An interview is, at some level, an interaction. One question which comes up is: how does the technology used affect that interaction? What we found was that a laptop usually was situated between the interviewer and the interviewees. This positioned the parties in an opposing manner. Rather than the content becoming the central focus of both parties, the content was either in front of the interviewer or in front of the interviewees. A tablet changes this dynamic in the interaction. It brings both parties together over a single set of content, both positionally and cognitively. When the photo is displayed on the laptop, the laptop has to be rotated so that the interviewees can see the image and then turned back so that the interviewer can input the data. This is not the case for a tablet.
Content Management Challenges
When paper is used for collecting metadata it is ideal to have one piece of paper for each photo. Sometimes this method is preferable to using a single computer. I used this method when I had a photo display with about 20 albums and about 200 people all filling out details at once.
People filling out metadata forms in front of a photo display.
People came and went as they pleased. When someone recognized a person or place they knew, they wrote down the picture ID and the information they were contributing, along with their name. However, when carrying around photo albums and paper there is the challenge of keeping all the photos from getting damaged and maintaining the order of the photos and associated papers.
Connectivity Challenges
When there is no internet there is no access to GoogleDocs. We encountered this when we went to someone's apartment, expecting internet because internet is available on campus and this apartment was also on campus. Fortunately we did have a back-up plan, and paper and pen were used. But this meant that we then had to type out the data which was written down on the paper, in effect doing the same recording work twice.
Size of Devices
Photo albums have a certain bulk and cumbersomeness which is multiplied when carrying more than one album at a time. Add a laptop computer and one might as well add a hand truck to the list of required items with which to carry everything. A tablet is, all in all, a lot smaller and lighter. [1]
Laptop and Tablet. This image is credited to Alia Haley [2].
Proof of Concept Technology
As I mentioned before, I had an iPad in my possession for a few days. So to capitalize on the opportunity, I bought a few apps from the App Store, as I said I would, and tried them out.
Software which does not work for our purposes
Photoforge2
The first app I tried was Photoforge2. It is a highly rated app in the app store. I found that it delivered as promised. One could add or edit the IPTC and EXIF metadata. One could even edit where the photo was taken with a pin drop interface.
iPad Fotoforge Location Data
iPad Fotoforge Metadata Editor
Meta Editor
Meta Editor, another iPad app which was also highly acclaimed, performed the task almost as well. Photoforge2 had some photo editing features which were not needed in our project, whereas Meta Editor was focused only on metadata elements.
MetadataEditor Location Data
After using both applications it became apparent that neither would work for this project for at least two reasons:
Both applications edit the standards-based IPTC and EXIF metadata fields in photos. We have some custom metadata which does not fit into either of these fields. One aspect of the technology being discussed, which might be helpful for readers to understand, is that these iPad applications actually embed the metadata into the photos. So when the photos are then taken off of the iPad, the metadata travels with them. This is a desirable feature for presentation photos (a sketch of what such embedding can look like follows this list).
Even if we do embed the metadata with these apps, the version of the photo being enriched is not the archival version of the photo; it is the presentation version. We still need the data to become associated with the archival version of the photo.
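For readers unfamiliar with embedded photo metadata, the sketch below shows one common way to write IPTC/EXIF-style fields into an image file from a desktop machine, using the ExifTool command-line utility. This is my own illustration, not what the iPad apps do internally, and the file name and values are placeholders; the point is simply that the metadata ends up inside the image file and travels with any copy of it.

```python
# Minimal sketch: embed descriptive metadata into an image file using ExifTool
# (assumes the exiftool command-line utility is installed). File name and
# values below are placeholders, not real archive data.
import subprocess

def embed_metadata(path, photographer, description, keywords):
    """Write photographer, description, and keywords into the image itself."""
    cmd = [
        "exiftool",
        "-overwrite_original",                 # do not keep a *_original backup copy
        f"-Artist={photographer}",             # EXIF field for the photographer
        f"-ImageDescription={description}",    # EXIF description field
        f"-XMP-dc:Description={description}",  # same text in the XMP/Dublin Core block
    ]
    cmd += [f"-IPTC:Keywords={kw}" for kw in keywords]  # one IPTC keyword per flag
    cmd.append(path)
    subprocess.run(cmd, check=True)

embed_metadata(
    "PC-A-0001.jpg",                           # hypothetical inventory-numbered scan
    photographer="Jane Example",
    description="Village school opening, exact place unknown",
    keywords=["Ecuador", "literacy"],
)
```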
Software with some really functional features
So we needed something with a mechanism for capturing our customized data. Two options were found which seemed suitable for the task: one ideal, the other rapidly deployable. Understanding the iPad's place in the larger corporate architecture, its relationship to the digital repository, and the flow of data from the point of collection to dissemination will help us visualize the particular challenges the iPad offers solutions for. Once we see where the iPad sits in relationship to the rest of the digital landscape, I think it will be fairly obvious why one solution is ideal and the other rapidly deployable.
Placement of the iPad in the Information Architecture Picture
In my previous post on Social Metadata Collection [3] (Hugh J. Paterson III. 29 June 2011. The Journeyler. [Accessed: 13 September 2011] https://hugh.thejourneyler.org/social-meta-data-collection. [Link]) I used the image below to show where the iPad was used in the metadata collection process.
Meta-data Collection Model
Since that time, as I have shown this image when I talk about this idea, I have become aware that the image is not detailed enough. Because it is not detailed enough it can lead to some wrong assumptions about how the proposed iPad use actually works. So, I am presenting a new image with a greater level of detail to show how the iPad interacts with other corporate systems and workflows.
iPad Team as they fit with other digital elements.
There are several things to note here:
Member Diaspora, as represented here, is not just members; it is their families, the people with whom these members worked, the members currently working, and the members living close at hand on campus, not just those in diaspora.
It is a copy of the presentation file which is pushed out to the iPad or to the website for the Member Diaspora. This copy of the file does not necessarily need to be brought back to the archive, as long as the metadata is synced back appropriately.
The Institutional Repository for other corporate items is currently in a DSpace instance. However, it has not been decided for sure that photos will be housed in this same instance, or even in DSpace.
That said, it is important that the metadata be embedded in the presentation file of the image, as well as accessible to the main container for the archival copies of the photos. The metadata also needs to sync between the iPad application and the Member Diaspora website. Metadata truly needs to flow through the entire system. A rough sketch of what that flow back to the archive might look like follows this list.
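To make the syncing requirement concrete, here is a minimal, hypothetical sketch of the merge step: metadata collected against presentation copies is keyed by the photo's inventory number and folded back into the archival record. The field names and data shapes are placeholders for whatever schema is actually adopted.

```python
# Minimal sketch: merge metadata collected on presentation copies back into
# the archival records, keyed by inventory number. Field names are placeholders.
archival_records = {
    "PC-A-0001": {"photographer": "", "description": "", "people": []},
    "PC-A-0002": {"photographer": "Jane Example", "description": "", "people": []},
}

collected_on_ipad = [
    {"inventory_id": "PC-A-0001", "photographer": "Jane Example",
     "description": "Village school opening", "people": ["John's mother"]},
]

def sync_back(archive, collected):
    """Fill empty archival fields with newly collected values; never overwrite."""
    for entry in collected:
        record = archive.get(entry["inventory_id"])
        if record is None:
            continue  # collected against a photo the archive does not know about
        for field, value in entry.items():
            if field == "inventory_id" or not value:
                continue
            if not record.get(field):   # only fill fields that are still empty
                record[field] = value

sync_back(archival_records, collected_on_ipad)
print(archival_records["PC-A-0001"]["description"])  # "Village school opening"
```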
FileMaker Pro with FileMaker Go
FileMaker Pro is a powerful database app. It could drive the Member Diaspora website and then also sync with the iPad. This would be a one-stop solution and therefore an ideal solution. It is also complex and takes more skill to set up than I currently have, or can currently spare to acquire. Both FileMaker Pro and its younger cousin Bento enable photos to be embedded in the actual database. (Several tips from the Bento forums on syncing photos which are part of the database: Syncing pictures from Bento-Mac to Bento-iPad; Sync multiple photos or files from desktop to iPad.) This is something which is important with regards to syncing with the iPad. To the best of my knowledge (and googling), no other database apps for the iPad or Android platforms allow for the syncing of photos within the app.
Rapid reuse of data. Because the interview process naturally lends itself to eliciting the same kind of data over a multitude of photos, a UX/UI element which allows the rapid reuse of data would be very practical. The kinds of data which would lend themselves to rapid reuse would be people's names, locations, dates, photographer, etc. This may mean being able to query a table of already-entered data values with an auto-suggest type function.
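As a small illustration of the auto-suggest idea (not a description of any of the apps mentioned here), the sketch below keeps previously entered values per field and offers prefix matches as the interviewer types. The field names and values are invented for the example.

```python
# Minimal sketch of an auto-suggest function over previously entered values.
# Field names and values are illustrative only.
from collections import defaultdict

previous_values = defaultdict(list)

def remember(field, value):
    """Store a value the interviewer has already typed for a given field."""
    if value and value not in previous_values[field]:
        previous_values[field].append(value)

def suggest(field, prefix, limit=5):
    """Return up to `limit` earlier values for this field that start with `prefix`."""
    prefix = prefix.lower()
    matches = [v for v in previous_values[field] if v.lower().startswith(prefix)]
    return matches[:limit]

remember("photographer", "Jane Example")
remember("exact_place", "Quito")
remember("exact_place", "Quininde")

print(suggest("exact_place", "Qui"))   # ['Quito', 'Quininde']
print(suggest("photographer", "Ja"))   # ['Jane Example']
```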
Custom iPad App
Of course there is also the option to develop a custom iPad app for just our purposes. This entails some other kinds of planning, including but not limited to:
Custom App development
Support plan
Deploy or develop possible Web-backend - if needed.
Kinds of custom metadata being collected.
The table in this section shows the kinds of questions we are asking in our interviews. It is provided only for reference, as a discussion of the Information Architecture for the storage and elements of the metadata schema is out of the scope of this discussion. The list of questions and values presented in the table was derived as a minimal set of questions based on issues of image workflow processing, intellectual property and permissions, and academic merit, and on input from Controlled Vocabulary's Caption and Keywording Guidelines [4] Controlled Vocabulary. Caption and Keywording Guidelines. [Accessed: 13 September 2011] http://www.controlledvocabulary.com/metalogging/ck_guidelines.html. [Link], which is part of their series on metalogging. The table also shows corresponding IPTC and EXIF data fields (though these columns are currently empty because I have not filled them in). Understanding the relationships of XMP, IPTC, and EXIF also helps us to understand why and how the iPad tool needs to interact with other archiving solutions. However, it is not within the scope of this post to discuss these differences. Some useful resources on these issues are noted here:
Photolinker Metadata Tags [5] Early Innovations, LLC. 2011. Photolinker Metadata Tags. [Accessed: 13 September 2011] http://www.earlyinnovations.com/photolinker/metadata-tags.html. [Link] has a nice display outlining where XMP, IPTC and EXIF data overlap. This is not authoritative, but rather practical.
There is also a List of IPTC fields. However, a list is not enough; we also need to know what the fields mean so that we know we are using them correctly.
EXIF and IPTC Header Comments is another list of IPTC fields. This list also includes a list of EXIF fields (again without definitions).
It is sufficient to note that there is some, but only some, overlap.
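To make the overlap concrete, here is a small, non-authoritative sketch of how a few of the custom elements from the table below could map to Dublin Core, IPTC, and EXIF fields. These mappings are my working assumptions, not decisions that have been made for the archive (which is why the corresponding columns in the table are still empty).

```python
# Hypothetical crosswalk from a few custom metadata elements to standard fields.
# These mappings are working assumptions, not an adopted archive policy.
CROSSWALK = {
    "Photographer": {
        "dublin_core": "dc:creator",
        "iptc": "By-line",
        "exif": "Artist",
    },
    "When was the photo taken?": {
        "dublin_core": "dc:date",
        "iptc": "Date Created",
        "exif": "DateTimeOriginal",
    },
    "Description": {
        "dublin_core": "dc:description",
        "iptc": "Caption/Abstract",
        "exif": "ImageDescription",
    },
    "Country": {
        "dublin_core": "dc:coverage",
        "iptc": "Country/Primary Location Name",
        "exif": None,  # no dedicated EXIF field; GPS tags cover coordinates only
    },
}

for element, targets in CROSSWALK.items():
    print(f"{element}: DC={targets['dublin_core']}, "
          f"IPTC={targets['iptc']}, EXIF={targets['exif']}")
```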
| Metadata Element | Purpose | Explanation | Dublin Core | IPTC Tags | EXIF Tags |
|---|---|---|---|---|---|
| Photo Collection | This is the name of the collection in which the photos reside | | | | |
| Sub Collection | This is the name of the sub-collection in which the photos reside | | | | |
| Letter of Collection | Each collection is given an alpha character or a series of alpha characters; if the collection pertains to one people group then the alpha characters given to that collection are the three-letter ISO 639-3 code | | | | |
| Who input the metadata | This is the name of the person inputting the metadata | | | | |
| Photo Number | This is the number of the photo as we have inventoried it | | | | |
| Negative Number | This is the number of the photo as it appears on the negative (film strip) | | | | |
| Roll | This is the ID of the roll | Most sets of negatives are cut into strips of 5 or fewer; this allows us to group these sets together to ID a "set" of photos | | | |
| Section Number | If the items are in a book or a scrapbook and that scrapbook has a section, this is where that is recorded | | | | |
| Page # | If a scrapbook has a set of pages, this is where they are recorded | | | | |
| Duplicates | This is where the Photo ID of a duplicate item is referenced | | | | |
| Old Inventory Number(s) | This is the inventory number of an item if it was part of another inventory system | | | | |
| Photographer | This is the name of the photographer | | | | |
| Subject 1 (who) | Who is in the photo; this should be an unlimited field, i.e. several names should be able to be added to it | | | | |
| Subject 2 | Who is in the photo; this should be an unlimited field, i.e. several names should be able to be added to it | | | | |
| Subject 3 | Who is in the photo; this should be an unlimited field, i.e. several names should be able to be added to it | | | | |
| Subject 4 | Who is in the photo; this should be an unlimited field, i.e. several names should be able to be added to it | | | | |
| Subject 5 | Who is in the photo; this should be an unlimited field, i.e. several names should be able to be added to it | | | | |
| People Group | This is the name of the people group named in the ISO 639-3 codes | | | | |
| ISO 639-3 Code | This is the ISO 639-3 code of the people group being photographed | | | | |
| When was the photo taken? | The date the photo was taken | | | | |
| Country | The country in which the photo was taken | | | | |
| District/City | This is the city where the photo was taken | | | | |
| Exact Place | The exact place name where the photo was taken | | | | |
| What is in the Photo (what) | This is an item in the photo | | | | |
| What is in the Photo | Additional item in the photo | | | | |
| What is in the Photo | Additional item in the photo | | | | |
| Why was the Photo Taken? | This is to help metadata providers think about how events get communicated | | | | |
| Description | This is a description of the photo's contents | This is not a caption but could be used as a caption | | | |
| Who provided this metadata? And when? | We need to keep track of who is the source of certain metadata to understand its authority | | | | |
| Who provided this metadata? And when? | We need to keep track of who is the source of certain metadata to understand its authority | | | | |
| Who provided this metadata? And when? | We need to keep track of who is the source of certain metadata to understand its authority | | | | |
| Who provided this metadata? And when? | We need to keep track of who is the source of certain metadata to understand its authority | | | | |
| Who provided this metadata? And when? | We need to keep track of who is the source of certain metadata to understand its authority | | | | |
| I am in this photo and I approve it to be on the internet. (Enter "yes" or "no" and write your name in the next column.) | Permission to distribute | | | | |
| Name | Name of the person releasing the photo | | | | |
| How was this photo digitized? | Method of digitization and the tools used in digitization | | | | |
| Who digitized this photo? | This is the name of the person who did the digitization | | | | |
Alia Haley. 31 August 2011. Tablet vs. Laptop. Church Mag. [Accessed: 11 September 2011] http://churchm.ag/tablet-vs-laptop. [Link]
David Riecks. 2010. IPTC Standard Photo Metadata (July 2010). International Press Telecommunications Council. [Accessed: 13 September 2011] http://www.iptc.org/std/photometadata/documentation/IPTC-PLUS-Metadata-Panel-UserGuide_6.pdf [Link]
Being interested in web design, and in issues of culture and art as they vary by cultural influence, I have had four nagging questions. Each of these questions probably deserves more thought than I can give it right now.
What is the impact of cultural attitudes towards web (the internet) on web-design?
What are the impacts of cultural information processing on web-design?
Why do official Mexico websites have blinking boxes and blinking text? Why does that work for them? Is this deeply seated in the history of computers, or is it deeply seated in the attitudes of Mexican government officials?
This truly does deserve a link. I wish I had written this down when I had thought about it. It was 2010 when I had come across this site.
What are the effects of cultural design impulses on information processing?
The importance of knowing about the datum [1] recently came to my attention as I was working with GIS data on a Language Documentation project. We were collecting GPS coordinates with a handheld GPS unit and comparing these coordinates with data supplied by the national cartographic office. The end goal was to compare the data samples we collected with conclusions proposed by the national cartographic office.
So, what am I talking about?
GIS data is used in a Geographical Information System. Basically, you can think of maps and what you might want to show with a map: rivers, towns, roads, language features, dialect markers, etc. Well, maps are shapes superimposed on a grid, and coordinates are a way of naming where on a particular grid a given point is located. The catch is that different datums define that grid differently, so the same point on the ground can have different coordinates depending on the datum in use.
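As a concrete illustration (my own sketch, not part of the original project's workflow), the snippet below uses the pyproj library to convert a WGS 84 coordinate, which is what a handheld GPS unit typically reports, into another coordinate reference system. Which system a national cartographic office actually uses is an assumption here; the point is simply that the numbers change when the datum/CRS changes.

```python
# Minimal sketch: the same point on the ground gets different coordinates
# under different coordinate reference systems / datums.
# Requires the pyproj package; the target CRS below (WGS 84 / UTM zone 17S)
# is only an example, not necessarily what a national cartographic office uses.
from pyproj import Transformer

# A hypothetical GPS reading near Quito, Ecuador (longitude, latitude, WGS 84).
lon, lat = -78.4678, -0.1807

# EPSG:4326 = WGS 84 geographic coordinates; EPSG:32717 = WGS 84 / UTM zone 17S.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32717", always_xy=True)
easting, northing = transformer.transform(lon, lat)

print(f"GPS (lon, lat): {lon}, {lat}")
print(f"UTM 17S (E, N): {easting:.1f}, {northing:.1f}")
```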
Wikipedia contributors. Datum (geodesy). Wikipedia, The Free Encyclopedia. 3 April 2011, 00:28 UTC. Available at: http://en.wikipedia.org/w/index.php?title=Datum_(geodesy)&oldid=422063702. [Accessed 5 May 2011] [Link]
Working in an archive, one can imagine that letting go of materials is a real challenge, both because it is hard to do because of policy and because it is hard to do because of the emotional “pack-rat” nature of archivists. This is no less the case in the archive where I work. We were recently working through a set of items and getting rid of the duplicates. (Physical space has its price, and the work should soon be available via JASOR.) However, one of the items we were getting rid of was a journal issue on a people group/language. The journal has three articles; of these, only one article was written by someone who worked for the same organization I am working for now. So the “employer” and owner-operator of the archive only has rights to one of the three works (rights by virtue of “work-for-hire” laws). We have the off-print, which is what we have rights to share, so we keep and share that. It all makes sense. However, what we keep is catalogued and inventoried. Our catalogue is shared with the world via OLAC. With this tool someone can search for a resource on a language, by language. It occurs to me that the other two articles on this people group/language will not show in the aggregated OLAC results. This is a shame, as they would be really helpful in many ways. I wish there were a grassroots, open source, web-facilitated effort where researchers could go and contribute metadata (citations) for articles so that they would be added to the OLAC search.