Tuesday, February 17, 2009
1. I'm teaching ESL to mostly Asian students. I've had some students complain that, as their English improves, their native language weakens. This is mostly vocabulary loss, but I've heard complaints about grammar changes as well. Under my understanding of the current mainstream theory, this should be impossible. Once the parameters are set (or however it's thought about these days), they should be set for good.
2. The longer teachers have been teaching, the worse their English gets. One teacher in particular, who's been teaching for about five years, frequently leaves out articles from noun phrases and drops objects for obligatorily transitive verbs. These are, I would say, two of the three most noticeable and common errors that even our most advanced students make. The errors in teachers' English are clearly the result of nothing but exposure to Asian ESL English.
3. The third of the three most common advanced student errors is using the wrong preposition in phrasal verbs and other verbs and nouns that require prepositions to link an object. E.g. *I gave it for my friend. Often these prepositions have no "logical" connection to their use. I've noticed from reading older novels that these "meaningless" prepositions change rapidly. Couldn't this be because they are connected by memory, not grammar?
4. Adverbs. And marginal sentences in general. There are many adverbs that can be in many places in the sentence. Most interestingly, there are often marginal places as well as good and bad places. Doesn't this sound like the meaning is okay, but we just don't normally say it that way so it sounds odd? E.g. (my judgements):
* Quickly the enemy will have destroyed the village.
?? The enemy quickly will have destroyed the village.
? The enemy will quickly have destroyed the village.
The enemy will have quickly destroyed the village.
* The enemy will have destroyed quickly the village.
The enemy will have destroyed the village quickly.
5. Language change. It should be nearly impossible for language as a whole to change under (my understanding of) the current model. Change can only come as a result of the child's error in hearing or understanding. How can it be transmitted to the community as a whole? How could young working class women lead change if grammar is set in infancy? If, on the other hand, grammar is linked to familiarity, change might be driven by what we hear around us as adults.
6. I can mimic accents. Well, I'm not very good at it, but some people are. Point being, the sounds we have access to are not set in stone, even in our native languages.
On the advice of one Matt Tucker, I'm going to read some Exemplar Theory and perhaps come back with something less hand-waving. For now, just some thoughts.
Friday, October 31, 2008
Apparently, the Welsh reads "I am not in the office at the moment. Please send any work to be translated."
As wondrous, important, and exciting as language revitalisation efforts are, I am pleased to see they can lead to such glorious and absurd blunders. Somehow, this seems even better than Engrish.
For the full story...
Thursday, August 14, 2008
Monday, June 23, 2008
Since I plan to return to the land of no sociolinguists for my Ph.D., I decided that, while living out here in eastern Canada, I should make an effort to absorb what knowledge I can about the ‘other side’ – so I hopped in a car to Ottawa and attended the second CVC (Change and Variation in Canada). As it is, now I feel like I can contribute a little to the issue that Meaghan brought up, and give my impressions of the conference.
So, of course, it’s true – there is quite a lack of communication between socio- and theoretical linguistics, and where I noticed this lack the most is with respect to formal semantics. While (most of) the variationist studies presented at the conference coded for linguistic factors (as well as social factors), the majority of these factors are phonological or morphosyntactic. If there is reference to semantics, it is always to lexical semantics as opposed to formal semantics. For instance, there was a great presentation by Tanya Romaniuk on the variation between the future temporal reference markers will and be going to, which used the first season of Friends as its corpus. While syntactic features like sentence mode (and other things) were coded for, as well as notions of lexical semantics (like whether a lexical meaning of ‘motion’ was significant), no notions of formal semantics were coded for. This was surprising to me since I believe that any current paper referring to will and be going to, if written from a theoretical point of view, would cite Copley 2002. Copley 2002 posits that the difference between will and be going to is a difference of aspect. She defines three different aspectual readings – bare, generic and progressive – and posits that will can be interpreted as either bare or generic, while be going to can only be interpreted as progressive. According to Copley’s definitions, the generic and progressive readings of the future can be characterized by a formal semantic property – the Sub-interval Property (SIP) – while the bare reading cannot. I was curious as to whether the results of Tanya’s variationist study would differ if these three different aspectual contexts, and this formal semantic property, were coded for as well.
Another thing that has always popped out at me whenever sociolinguistics was concerned was the fact that the majority of sociolinguistic study is done on dialects of well-studied languages like English and French. As far as I know, there is no sociolinguistic research that takes, for example, a First Nations language as the object of study. The most common response I received when voicing this observation during the “Speed-date a Variationist” session (more on that later!) was that it is difficult/impossible to do research on variation (the coding part, at least) unless one is a native speaker of the language, and especially so if the language is understudied and few linguists have a good understanding of its grammar. Although there’s really no great way of solving this problem (short of finding native speakers of these languages who are interested in social identity and statistics?), I thought it was a bit of a shame for at least two reasons. First, from a social perspective, it would be interesting to do variationist research on reserves, in that I think reserves might represent a unique kind of social situation. Second, from a linguistic perspective, languages with different grammatical structures should offer different patterns of change. To clarify that last part, I’ll bring up another example. One of the students from U of T (Derek Denis) presented about deontic modality in English, and one of the comments afterwards was that the rising usage of have to as representing deontic modality, as opposed to the older form must, might be triggered by the rising usage of must as an epistemic modal (cf. Thibault 1983, for this analysis for French).
As is well known, in English, modals are lexically encoded with their strength of quantification (existential or universal), while they may vary with respect to their modal bases such that the same lexical item can act as a deontic modal, an epistemic modal, a circumstantial modal, a dispositional modal, etc. Now, Matthewson et al. (2006) have argued that in St’at’imcets, modals arrange themselves in a different way such that the same lexical item is encoded with a modal base, but may vary with respect to the strength of quantification. Because of this, I thought it would be really interesting if it was possible to do a study on how the usage of modals in Salish changes over time. If we take Matthewson et al. (2006)’s proposal as an initial assumption, then we would expect a different pattern of language change in St’at’imcets as compared to English, as ambiguity with respect to one’s modal base could not act as a trigger for change. Whether ambiguity in the strength of quantification could trigger change in another direction would be really interesting.
Abstracting away from the content of the conference, I want to say that there were a lot of aspects of the organization of the conference that I thought were really cool – the two that impressed me most were the two sessions not dedicated to presentations. There was one tour of the sociolinguistics lab at uOttawa, where I thought I was going to die in organizational, colour-coded bliss. Everything was so pretty and well-organized – so inspiring! (I desperately want a colour-coded Blackfoot lab.) The second session, which at first horrified me, but in hindsight was quite cool, was the session titled “Speed-date a Variationist.” In this session all participants in the conference (including me, the covert theoretical linguist) sat through 16 four-minute speed-dates, each with another participant. Although some parts were very awkward (i.e., the 16 times I had to explain why I was at the conference even though I’m not a variationist…), it was also an excellent chance for nearly everyone to learn a little bit about people you would otherwise only know by sight, and ask questions that you might not have gotten the chance to ask otherwise. This kind of session is probably only feasible for small conferences, but it was a good ice-breaker for the party afterwards. In the end, attending the conference hasn't converted me into a sociolinguist, but I am quite fond of their conferences : D
Monday, April 21, 2008
According to Wikipedia, grizzly bears speak English:
My Question: Are grizzly bears bilingual?
I ask this because the article doesn't just state that they speak English, it states that they "also speak english." The problem is that there's no previously mentioned language. They speak English in addition to Inuktitut? French? Grizzly?
Now, it could be the case that "also" is contrastively picking out "grizzly bears" among other speakers of English, as opposed to contrastively picking out English among the languages grizzly bears speak, but since this is an article on Grizzly bears, I don't think this is what the vandaliser had in mind.
Another possible option is that the predicate "speak English" is being contrastively picked out, among the other things that are normally predicated of grizzly bears, such as "being listed as threatened in the contiguous United States, and endangered in parts of Canada." This is probably the most likely, but I've copied and pasted the paragraph for your analyzing pleasure, in case the Wikipedia article gets fixed, erm, I mean edited, before you get a chance to see it.
Moral of the story: Everything you find on the Wikipedia is always true.
The grizzly bear is listed as threatened in the contiguous United States, and endangered in parts of Canada. In May 2002, the Canadian Species at Risk Act listed the Prairie population (Alberta, Saskatchewan and Manitoba range) of grizzly bears as being extirpated in Canada. In Alaska and parts of Canada however, the grizzly is still legally shot for sport by hunters. On January 9, 2006, the US Fish and Wildlife Service proposed to remove Yellowstone grizzlies from the list of threatened and protected species. In March 2007, the U.S. Fish and Wildlife Service "de-listed" the population, effectively removing Endangered Species Act protections for grizzlies in the Yellowstone National Park area. Grizzly bears also speak english.
FURTHER NOTE: Ursus arctos horribilis are also awesome, according to the sidebar.
Monday, February 11, 2008
Do not, for the love of everything you hold dear, use the pitch tracker for this purpose! You will get results like, for a brief example, the following*:
(Click for larger version.)
Do you see the problem? If not, try looking again:
I'm pretty sure my f0 did not, in fact, sharply drop to 0Hz and, just as sharply, return to 166Hz. And, though I'd like to think that I had superhuman laryngeal muscles, I'm pretty sure this dramatic, nigh-instantaneous change is physiologically improbable. This is but one of the myriad ways that the Praat pitch tracker will fail spectacularly to give you adequate results. "All right, Aislin," you may be thinking, "I understand that Praat's pitch tracker is epic fail. But how else am I to get my sweet, succulent f0 data?" Well, fear not! Calculating f0 your own self is dead easy, and I'm going to gently guide you.**
For those with less of an acoustic background, here's the short version of how it works: when your vocal folds vibrate, they create a complex periodic sound wave, the sum of all the harmonics. The fundamental frequency is the same as the first harmonic (h1), and all the harmonics are multiples of the fundamental. This means that the distance between any two harmonics is also equal to the fundamental. Because of this, the quickest and easiest way to calculate f0 involves measuring the harmonics.
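As a toy illustration of that relationship (the particular f0 value here is arbitrary, chosen just for the example): every harmonic is an integer multiple of f0, so the spacing between any two neighbouring harmonics is exactly f0.

```python
# Harmonics of an arbitrary example f0: each is a multiple of f0,
# so neighbouring harmonics are always f0 apart.
f0 = 166.0  # Hz, made-up example value
harmonics = [f0 * n for n in range(1, 6)]
spacings = [b - a for a, b in zip(harmonics, harmonics[1:])]
print(harmonics)  # [166.0, 332.0, 498.0, 664.0, 830.0]
print(spacings)   # every gap equals f0
```

This is exactly why reading the gap between harmonics off a narrowband spectrogram (next step) gets you f0.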
To see the harmonics, we're going to use narrowband spectral data. Narrowband gives you worse time resolution, but better frequency resolution, so you can make out the individual harmonics better. Under Spectrogram settings, you can set the window length to 0.03s, and that should do. It'll look something like this:
Notice how you can immediately see that the pitch tracker's display contradicts the spectral data. For this reason, if you must use a pitch tracker display, it's best to show it over a narrowband spectrogram to demonstrate that it's not doing anything heinous.
Now that you've got a narrowband spectrogram, you need to take a slice of it using the Spectrogram menu. You'll see evenly-spaced bumps, each of which is a harmonic. If you see one right at the left edge, though, be careful! That's most likely an artifact of an imperfect mic. If you measure the space between that first bump and the next, you'll find that it's shorter than the spaces between the rest. Also, adult female speakers like myself generally have an f0 of around 200Hz, so the fact that this bump's peak occurs at ~30Hz is a big tipoff.
You want to pick a higher harmonic, maybe h10, and look at the smallest possible chunk of the spectrum that shows you this harmonic. Both of these cut down on measurement error, making your reading as accurate as possible! Place your cursor at the very tippy-top of your selected harmonic, and Praat will tell you what the frequency is. Once you have the value of your whateverth harmonic, divide it by its harmonic number to get h1 (i.e., divide the value for h10 by 10, h11 by 11, h9 by 9...). Remember that h1 equals the f0, so you are done. In summation:
Anyone who wants to use your data down the line will thank you.
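If you'd rather script it than click around, the same harmonic-division trick can be sketched in a few lines. This is just a toy, assuming a synthetic "voiced" signal with a known f0 of 200 Hz (and I cheat by searching near the known h10; in real data you'd pick the harmonic by eye first):

```python
import numpy as np

# Synthesize a complex periodic wave (f0 = 200 Hz plus harmonics),
# standing in for a recorded vowel. All values are made up for illustration.
sr = 44100
t = np.arange(0, 0.5, 1 / sr)
f0_true = 200.0
signal = sum(np.sin(2 * np.pi * f0_true * n * t) / n for n in range(1, 12))

# A long windowed FFT gives fine frequency resolution -- the script
# analogue of a narrowband spectrogram slice.
spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

# Measure a higher harmonic (h10 sits near 2000 Hz here) and divide by
# its number -- the same trick as reading the peak off the slice.
n = 10
near_h10 = (freqs > f0_true * n - 50) & (freqs < f0_true * n + 50)
h10_freq = freqs[near_h10][np.argmax(spectrum[near_h10])]
f0_estimate = h10_freq / n
print(round(f0_estimate, 1))  # should land near 200 Hz
```

Measuring h10 rather than h1 divides any cursor/bin error by 10, which is the whole point of picking a higher harmonic.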
*This represents two tokens of myself saying 'high'.
**If your training has included much acoustic phonetics, you're probably rolling your eyes right now. Sorry, but my recent experiences with tonal data tell me this PSA is desperately needed.
Sunday, January 27, 2008
The person doing the routine, Taylor Mali, presents himself as having issues with the lack of certainty conveyed by utterances tagged with elements like "you know," or with a question intonation. I wonder if this stand-up routine would fly in front of native speakers of Cree and Quechua, languages in which utterances with evidential/epistemic interpretations are more common than outright assertions?
Not sure what else to say when one of my favourite procrastination methods (Stumble-Upon) leads me to a YouTube video that is directly relevant to my research. Maybe the research gods are trying to tell me that I've been procrastinating too much and should get back to work...
I've looked him up a bit - I thought he was a comedian, but it turns out he's a teacher and "slam poet" - so maybe it isn't a stand-up routine, but a poetry slam.