Friday, December 28, 2007

Some Good News on Species Preservation

The Association of Zoos and Aquariums (an organization mostly in the news at the moment in relation to the Christmas tragedy at the San Francisco Zoo) has released its “Top Ten Wildlife Conservation Success Stories in 2007.”

Two of the stories were of particular interest to me. One caught my attention as an anthropologist, as it involves attempts to save a primate species, the black and white ruffed lemur, while the second involves a species that in the wild inhabits only a single barrier island off Pensacola, where I live and teach: the Perdido Key beach mouse.

The following paragraphs are from the AZA press release:

Black-and-white ruffed lemurs born in zoos are getting a feel for their new home at the Betampona Natural Reserve in eastern Madagascar. The Madagascar Fauna Group (MFG), and the Duke Lemur Center coordinated the plan to reintroduce zoo-bred lemurs to the wild, with the help of other MFG partners and institutions, including Salt Lake City's Hogle Zoo, the Los Angeles Zoo and the Santa Ana Zoo. The released individuals are being monitored and have fared well so far, with four offspring born from three reintroduced lemurs.

This summer, Santa Fe Community College Teaching Zoo, in Gainesville, Florida, began housing 52 Perdido Key beach mice to protect the species from extinction. The mice originated from the University of South Carolina, but needed to be relocated after damage from Hurricane Ivan. The Brevard Zoo, Florida Aquarium, and Palm Beach Zoo have since shared in the responsibility of caring for and studying the mice. There are only a few hundred individuals left in the wild, inhabiting just one barrier island off the coast of Pensacola. Researchers fear that a hurricane could be disastrous to the beach mice, potentially causing the species to become extinct in the wild. Breeding studies have commenced to safeguard their numbers.

Saturday, December 22, 2007

Russia and Separatism in Europe

This morning I ran across an interesting article put out by the Russian News and Information Agency, “Hotbeds of Separatism in Modern Europe.”

It’s interesting for two reasons. First, it offers a useful catalogue of separatist sentiment and movements across Europe (though more discussion of the degree of seriousness or importance of separatism in each case would have been useful – but see the discussion of subtext below), including the Basque region in Spain and France, Catalonia and Valencia, Corsica and Bretagne, Northern Italy, Belgium, the Faeroe Islands, the Swiss canton of Jura, Vojvodina, and Romanian Transylvania.

Perhaps more interesting are the areas not discussed. Kosovo itself isn’t given much discussion, even though it is the primary motivation for the article – see below. In a piece that delves into separatist politics in the Faeroe Islands or Swiss cantons, it’s striking, though perhaps not surprising, that none of the areas in which Russia supports separatist movements or governments – Transdniestria, Abkhazia, South Ossetia – are mentioned, nor are any of Russia’s own separatist regions, most obviously Chechnya. (Technically, depending on where you want to draw the arbitrary line between Europe and Asia, Abkhazia, South Ossetia, and Chechnya might be outside Europe, but Transdniestria is in Europe by any remotely conventional definition.)

The article has an overall editorial agenda that’s pretty clearly stated in the first paragraph:

“The Kosovo issue has been forwarded to the UN Security Council. The Russian Foreign Ministry suggests that Belgrade and Pristina should have another chance to come to terms. A decision on Kosovo's secession from Serbia will create a precedent and violate international law.”

In addition to this brief editorializing against Kosovar independence, the subtext of cataloguing such a broad range of separatisms (except those involving Russia), as interesting as the catalogue is in its own right, seems to be a warning that much of Europe needs only the precedent of Kosovo for Spain, France, Belgium, the UK, Italy, etc., to come flying apart.

Thursday, December 20, 2007

Two Items on Bolivia

Those following the news from Bolivia know that there has recently been heightened political tension within the country, tied to constitutional reforms associated with President Evo Morales. The tensions map onto a longstanding social and geographic divide between the mostly poor highland west, where most of the country’s population resides, and the lowland east, associated with agricultural production, the country’s oil and gas resources, and a small wealthy elite. The constitutional reforms would redistribute more wealth to the west, with many in the east calling for greater autonomy, or even independence, for the lowland eastern provinces. Simon Romero has published a good overview of the situation in the International Herald Tribune, “Little Middle Ground in Country of Extremes.”

On a related note is a post I recently encountered on the blog “Two Weeks Notice,” written by Greg Weeks, “Thoughts on Democratators.” Weeks addresses a term, “democratator” (and an ugly neologism it is), combining “democracy” with “dictator,” that has been used by some media commentators to imply that some popularly elected leaders (especially Venezuela’s Hugo Chávez, Bolivia’s Evo Morales, and Ecuador’s Rafael Correa), once elected, act as de facto dictators. Weeks’ point is not to suggest that Chávez, Morales, and Correa lack authoritarian tendencies, but to make a larger argument: much media commentary has a problematic tendency to conflate all varieties of “leftists,” and even to treat all leaders with authoritarian tendencies as if they were the same. Weeks writes, “No matter what you think of Correa, he is not Musharraf. Nor is Chávez the same as Hosni Mubarak.”

Frankly, Chávez probably contributes to this tendency through his cultivation of close ties not just with Cuba’s Fidel Castro (which makes sense) but also with Iran’s Mahmoud Ahmadinejad and Belarus’ Alexander Lukashenko. I would also give credit to the writers of most of the news stories and commentaries I’ve read recently pertaining to Latin America for increasingly differentiating between “leftists” of the Chávez/Morales/Correa variety and “leftists” like Brazil’s Luiz Inacio Lula da Silva or Chile’s Michelle Bachelet. When Lula and Bachelet first rose to prominence, they too were often associated if not conflated with Chávez, whereas now they are increasingly presented as “good” or “responsible” leftists in contrast to the bad leftism of Chávez, Morales, and Correa.

Tuesday, December 18, 2007

Leonard Bernstein and Meaning in Music

Leonard Bernstein has several pop culture faces. To some of us who grew up in the 1980s, myself included, he was first off a name shouted out in an R.E.M. song, perhaps followed by the question, “Who the hell is Leonard Bernstein?” (I wonder how much of my liking of Bernstein’s music might be attributable to positive associations with the R.E.M. song.) To some (not mutually exclusive with the first group), he was an important mid-20th century American composer who bridged a gap between popular music and entertainment and the Western “high” art music tradition. To some, he was one of the greatest and/or most important conductors of the 20th century. He was also an important mid-century music educator, especially through the televised series of “Young People’s Concerts” he conducted with the New York Philharmonic.

I recently watched one of these “Young People’s Concerts” on DVD that focused on the theme of meaning in music, with Bernstein talking to the children in attendance at Carnegie Hall in between musical examples.

The issue of meaning in music is difficult. Music is capable of meaning – it affects us, and that effect is the result of a semiotic experience – but what music communicates and what its effect is are not directly translatable into linguistic meaning. (Food and taste generally, as well as smells, present similar situations. Foods and smells are meaningful not just because of symbolic associations we might have with them, e.g. the Thanksgiving turkey or the smell of a rose, but also because of associations with the direct physical experiences of eating or smelling.)

Bernstein’s basic argument is something I agree with – the meaning of music, however hard it may be to define (precisely because it is non-linguistic), is intrinsic to the music and does not derive from anything extrinsic to it, such as a story or title associated with a piece. He argues that while we might associate stories or titles with music, such associations are essentially arbitrary.

He uses the example of Beethoven’s “Pastoral” symphony, specifically the movement titled “By the Brook.” Bernstein agrees that the music is capable of evoking a mental image of a gently babbling brook, but argues that the music could equally evoke “Swaying in a hammock” if differently titled. I agree, even if I find the idea of a Beethoven “Backyard” symphony with its “Swaying in a Hammock” movement amusing and difficult to imagine having been written. But I also immediately reacted that the music could not evoke “Riding on a train” or “Falling off a cliff.” Those titles and mental images just wouldn’t fit the music.

He gives another example using the “Great Gate of Kiev” movement of Mussorgsky’s “Pictures at an Exhibition.” He argues that the “strong chords” of the music fit that image, but could equally fit the flowing of the Mississippi River. In saying so, he’s almost making an argument that there is a necessary iconicity between musical elements and any non-musical elements potentially evoked by the music, but he then undermines this by insisting that there’s no real connection between music and image. I agree that the “Great Gate of Kiev” music could evoke the Mississippi River, but I can’t imagine it evoking “By the Brook,” much less something like “Mowing the Lawn.”

The association between music and extra-musical meaning (if any) is arbitrary in the sense that any given piece of music could potentially be associated with a variety of images. “By the Brook” could evoke “Swaying in a hammock.” But the association of music and extra-musical meaning is not purely arbitrary – the range of potential associations is defined in part by the range of phenomena that share some iconic relationship with one another, that is, that have some clear and systematic relationship of similarity with one another.

Monday, December 17, 2007

Russia, China, and a New "Great Game"

The term “Great Game” usually refers to the 19th century contest for influence in Central Asia between Russia and the British Empire. A recent article, “New ‘Great Game’ for Central Asia Riches,” provides a good overview of the current contest for influence in Central Asia by outside powers.

As the article makes clear, after September 11, 2001, the U.S. became heavily invested in the region, though it has now been relegated to the role of a more marginal player. This is partly due to waning interest on the part of the U.S. government, and partly due to the heavy initiative and investment shown by China and Russia, now the two main outside influences in the region.

China in particular has substantially increased its investment in the region, which has also helped fuel the economic development of western China. The China-Kazakhstan border is coming to resemble the U.S.-Mexico border as one of the few long international borders between a much more developed country and a much poorer, less developed one, with investment from the richer country fueling asymmetrical but cross-border development.

Sunday, December 16, 2007

Food and Biofuels

The world is currently experiencing tremendous inflation in food prices. As a report in a recent issue of The Economist (December 8, pp. 81 – 83) argues, there are two major causes of this global food inflation (not to deny the potential for other factors as well – and see my note below on the contribution of oil prices to food inflation).

One of these contributing factors is actually a side effect of a positive development. The level of affluence has risen dramatically in China and India and some other developing nations in recent years. As in already developed countries, affluence has some negative consequences, e.g. greater environmental impact from higher per capita energy consumption. Higher affluence has also led to a boom in meat eating in China and India – The Economist reports that meat-eating in China went from 20 kg of meat per capita per year in 1985 to more than 50 kg per capita per year now. More meat equals more grain grown for feed equals (unless tremendous, even stupendous, quantities of land were put into grain production – causing a whole new set of ecological problems) higher prices for grain.

The second major cause of current global food inflation is the diversion of enormous amounts of grain, especially maize, to subsidized biofuel production in places like the U.S. This has resulted in an increase in maize prices, which alone contributes to food inflation, but with the further result that many farmers have switched from cultivating other grains to maize, much of it for biofuel purposes, further contributing to food inflation.

An article, “Biofuels: Danger or New Opportunity for Africa?,” makes clear that the problem (to the extent that food inflation is a problem – The Economist report argues that with increased food prices, some farmers, including some in the developing world, will benefit, depending on how food inflation is managed by governments) is not the use of biofuels per se.

The “Biofuels” news article reports on a conference on biofuel and food held in Ouagadougou, Burkina Faso, where a number of perspectives on biofuels were presented. Many voices call for cautious development of biofuel production in Burkina Faso and other African nations.

Within this framework of caution, some individuals expressed hope for biofuel development in Africa for a variety of reasons.

(1) In non-oil-producing countries like Burkina Faso, biofuels could potentially provide a lower-priced source of fuel than oil imports, given the current astronomical price of oil. (It seems clear to me, and I was surprised that the report in The Economist didn’t deal with this, that global oil prices are a major contributor to food inflation in two ways: [a] increased transportation costs due to higher oil prices add to the cost of all commodities; [b] the high price of oil is the main spur for biofuel development.)

(2) Biofuel and food aren’t mutually exclusive. For example, biofuel byproducts can still be used as feed for livestock or as fertilizer. Further, biofuel need not be produced strictly from edible grains. Brazil’s sugar cane (edible, but not a grain) provides a far more efficient source for biofuel production than North America’s maize, and for countries like Burkina Faso, biofuel might be best produced from non-edible plants grown on land less well suited to direct food production.

(3) Biofuels don’t have to fuel everything in order to be useful – they can be used strategically. For example, in poor countries, diverting small proportions of crops to biofuel production specifically to fuel tractors and other agricultural equipment could be a way to simultaneously increase the scale of production and have agricultural production fuel itself.

Again, the problem isn’t biofuels per se, but the diversion of large portions of the world’s food supply (especially North American maize) into fuel production in a context of trade and other policies that stymie more efficient and sensible biofuel production.

Wednesday, December 12, 2007

Mixed News on Children’s Food Preferences

I recently encountered an interesting article on Medical News Today about research conducted by Kent State University scholars on children’s food preferences, “Strawberries, Watermelon, Grapes, Oh My! Study Finds Students Will Opt For Healthy Foods In The Lunch Line.” Despite the upbeat title, I find the news reported hopeful but mixed from the standpoint of healthy nutrition choices.

The fact that children rank fruits among their favorite foods is encouraging. This is balanced, though, by the inclusion of pizza and fast-food-style choices among their favorites as well. I’m also more ambivalent than the article’s author about counting something like “string cheese” as a healthy food. At the same time, it is encouraging to hear that even as they offer lunch options of pizza and fast-food-style choices, more school districts are offering healthier versions of these items than in the past.

On a last note, while the researchers attribute preferences for pizza, French fries, or chicken nuggets to cultural influence, I would argue that preferences for things like fruits and for such fast food fare are all mediated by a combination of evolutionarily selected biological factors and cultural influences. A taste for certain food qualities, such as sweetness, the taste and texture of fats or proteins, saltiness, etc., seems to be part of our evolutionary heritage, and this is part of the reason that children (or adults) find fruits or chicken nuggets tasty. Patterns of consuming and acquiring a preference for specific food items are clearly also shaped by cultural context, though the precise influences shaping children’s desires for grapes or pizza differ.

The following is a selection from the article:

“Strawberries, grapes, and yogurt are just some of the healthier food items children prefer, researchers argue in a new study released this week. Kent State University researchers surveyed 1,818 students in grades 3 through 12, asking them what their favorite foods were. The study, included in the Winter 2007 issue of the Journal of Child Nutrition & Management, found that items such as strawberries, watermelon, white milk, and string cheese ranked among the "Top 20" foods, demonstrating that children will eat fruits, vegetables, and dairy products.
“The researchers also found differences in taste between grade levels. Elementary school students were more likely to rank fruits much higher than older children, while "fast and familiar" foods such as chicken nuggets and hamburgers were less preferred by middle school and high school students.
“Although healthy items made the "Top 20" list, children still consider pizza, French fries, and chicken nuggets among their favorite foods. The researchers attribute this to the influence of culture on students. On average, approximately 30% of students consume fast food on any given day, making it more likely that students will eat these foods at school. To accommodate their tastes, school nutrition professionals offer these items, but use healthier ingredients such as whole grains, low-fat cheese, and lean meats and prepare the foods with healthier cooking techniques such as baking.
"School foodservice professionals and dietitians have been promoting the consumption of a wide variety of foods for a healthy diet," concluded researchers Natalie Caine-Bish, PhD, RD, LD and Barbara Scheule, PhD, RD. "Menu planners should consider the inclusion of these selections (favorite foods) in their menus as means to improve nutritional quality as well as satisfaction."

Tuesday, December 11, 2007

B.J. and the Bear, Coolness and Essentiality

I present here a representation of a chain of associations that occurred to me recently. It’s not anything like stream of consciousness writing, but perhaps a representation of a stream of consciousness.

Lately, my partner and I have been truly enjoying the VH1 “reality” show “America’s Most Smartest Model.” It’s mindless entertainment, but unlike all but a handful of other television shows, it actually is entertaining, even if we can’t figure out exactly what it is about the show that makes it so while other shows just seem bad.

In any case, most television is just plain bad. That’s something that most everyone I know agrees with. We may disagree on which are the few shows that are entertaining or have some redeeming qualities, but most everyone can agree that most television shows are not worth watching.

But television has always been bad. Take, for example, the late 1970s program “B.J. and the Bear.” I was quite fond of the show at the time, but I have the excuse that I was eight or so years old when it first came on. Looking back, I wonder how such a show was ever made; I wonder who ever thought it was a good idea.

It’s a show about a man and a chimpanzee who drive an 18-wheeler (painted in the same red-with-an-angular-splash-of-white color scheme as the car in “Starsky and Hutch”) around the country and get into adventures. Just to confuse matters, the chimpanzee is named “The Bear.” And when ratings eventually flagged, the “Seven Lady Truckers” were waiting in the wings.

Actually, the existence of “B.J. and the Bear” is not so mysterious – it’s the product of the convergence of two of the more improbable pop culture phenomena, the man-ape buddy show (see also the highly successful and slightly earlier Clint Eastwood film Every Which Way But Loose, co-starring the orangutan Clyde) and trucker-chic.

As many might remember, there was a period of time in the late 1970s when truckers were in, e.g. the success of the Smokey and the Bandit movies, or Convoy.

Truckers may be many things. Most are honest and hard working – certainly anyone who manages to make a decent living driving trucks works hard.

Even more, truckers are essential. In any modernized society, we’d starve to death without truckers.

One thing truckers are not, though, and which makes the late ‘70s trucker-chic phenomenon so inexplicable, is cool. (I’ll grant that a combination of two components of American ideology was behind the trucker-chic thing – the allure of the open landscape and open road, and the idea of making one’s way in the world through one’s own individual labor. I can see where “trucking” could be almost cool, but I’ve also been to enough truck stops to see that truckers are not – with that not in any way intended as a slight. Again, truckers are essential. Further, some individual people who are truckers may be cool, but their coolness is separate from their “trucker-ness.”)

There are many other people who also perform occupations that are essential, at least essential to the functioning of modern society, e.g. sanitation workers, secretaries, factory workers, bus drivers, etc. One thing these essential occupations have in common is their lack of coolness.

(I’m resisting precisely defining coolness here; perhaps I’ll do that at some later point. What sorts of things or occupations are potentially cool? Some examples often thought cool: musicians, especially in some genres; some types of writers and artists; athletes; clothing styles associated with youth and/or social detachment.)

I suspect there are many qualities to coolness, but I’ll conjecture here that one component of coolness (in the sense of “hipness,” as opposed to the sort of coolness of being “cool under pressure”) is inessentiality.

Occupations, activities, or things that are cool are in some way inessential, even superfluous (though not to say useless, for some use can be found for anything).

The reverse doesn’t hold so clearly, though. That is, inessentiality doesn’t make you cool. (Put another way, inessentiality is a necessary but not sufficient condition for coolness.) Academics and scholarly types are generally neither essential nor cool. Jazz musicians are cool (or at least were in the days when jazz was associated with youth dance halls in the swing era or with dank clubs in seedy parts of town in the bebop and hard bop periods – nowadays, with highly professionalized musicians often playing jazz as repertory [not necessarily bad things], and with jazz increasingly thought of, like classical music, as music to be edified by, jazz musicians are less clearly cool). Jazz critics, no matter how interesting their musings, are not cool. (Like truckers, some individual critics might be cool, but their coolness relates to personal factors other than their “critic-ness.”)

Sunday, December 9, 2007

Hope on Slavery

It’s rare to encounter encouraging news about contemporary slavery. Wherever unpaid forced labor arrangements occur, whether in Mauritania or the U.S., they usually occur within informal sectors of society and the economy that are difficult to observe, and where laws and policies against them are, for a variety of reasons, only weakly enforced. The article “Mauritania: The Real Beginning of the End of Slavery?,” from AllAfrica.com, offers at least the hope of real change on this issue in that national context.

The following is from the article:

“Four months after the passing of a law criminalising slavery in Mauritania, anti-slavery activists hope newly-announced funding for the reintegration of former slaves will address the many problems they continue to face in Mauritanian society.
"Quite obviously, we're very pleased with the announcement," said Biram Ould Dah Ould Abeid, member of the anti-slavery organisation SOS Esclaves, which has been leading the fight against slavery in Mauritania for years. "The government is sending slaves a strong signal and it is also proof that the authorities have heard our calls."
When slavery was criminalised in August, human rights and anti-slavery organisations urged the government - as they had been doing for years - to adopt accompanying measures for the law to be effective.
Officially abolished in 1981, slavery continues to be practiced in all Mauritanian communities, mostly in rural areas, by upper-class lighter-skinned Moors (Berber Arabs) as well as black Africans. One estimate by the Open Society Justice Initiative places the number of slaves and former slaves at 20 percent of the population - or about 500,000 people - but the numbers are difficult to confirm.”

Saturday, December 8, 2007

Karlheinz Stockhausen, 1928 - 2007

The important German composer Karlheinz Stockhausen has died. Stockhausen, especially with his works of the 1950s through the 1970s, was one of the more influential composers of the past few decades, influencing music across multiple genres, including contemporary classical or art music, jazz, electronic musics and sampling of all sorts, and rock and pop.

The following is from the New York Times:

“In “Song of the Youths” (1956), he used a multichannel montage of electronic sound with a recorded singing voice to create an image of Shadrach, Meshach and Abednego staying alive in Nebuchadnezzar’s fiery furnace. In “Groups” (1957), he divided an orchestra into three ensembles that often played in different tempos and called to one another. (My inserted note: As with any creative and original person, the sorts of things Stockhausen did were not completely without precedent. Much of what he did is anticipated, albeit with a decidedly different flavor, by the earlier 20th-century American composer Charles Ives, e.g. the use of musical montage, or the division of an orchestra into different ensembles playing at different tempos but relating to one another in his “Universe Symphony.”)
Such works answered the need felt in postwar Europe for reconstruction and logic, the logic to forestall any recurrence of war and genocide. They made Mr. Stockhausen a beacon to younger composers. Along with a few other musicians of his generation, notably Pierre Boulez and Luigi Nono, he had an enormous influence. Though performances of his works were never plentiful, his music was promoted by radio stations in Germany and abroad as well as by the record company Deutsche Grammophon, and he gave lectures all over the world.
By the 1960s his influence had reached rock musicians, and he was an international subject of acclaim and denigration.”

The following excerpts are from Bloomberg.com:

“Paul McCartney and John Lennon of the Beatles were Stockhausen fans, and the group honored the composer by using his image on the cover of its 1967 album, “Sergeant Pepper’s Lonely Hearts Club Band.” The single “Strawberry Fields Forever” showed Stockhausen’s influence.
He inspired some of the music by Frank Zappa, Pink Floyd, Miles Davis and Brian Eno. His groundbreaking electronic beats found echoes in long compositions by Can, Kraftwerk and Tangerine Dream in the 1970s. Of classical composers, Igor Stravinsky was an admirer, though not an uncritical one. Stockhausen’s music was compared to Arnold Schoenberg and Olivier Messiaen before him. He went on with Pierre Boulez to offer a vision of the future.
Stockhausen was seen by some as the greatest German composer since Wagner. To others, his music was empty and devoid of merit. Conductor Thomas Beecham was asked, “Have you heard any Stockhausen,” and said, “No, but I believe I have trodden in some.””

“His breakthrough came in 1956, with the release of “Gesang der Junglinge” (Song of the Youths), which combined electronic sounds with the human voice, the Guardian newspaper said.
In 1960, he released “Kontakte” (Contacts), one of the first compositions to mix live instrumentation with prerecorded material.”


For more on Stockhausen, see “Composer Karlheinz Stockhausen is Dead” from Yahoo News, “Karlheinz Stockhausen, Composer, Dies at 79” from the New York Times, and “Karlheinz Stockhausen, Pivotal German Composer, Dies at Age 79” from Bloomberg.com. I recently wrote of Stockhausen, albeit briefly, in my post, “Mythic Music: Stockhausen, Davis and Macero, Dub, Hip Hop, and Lévi-Strauss.”

Wednesday, December 5, 2007

Comments on Ben Ratliff’s Coltrane

I’ve been reading and enjoying the recent book by Ben Ratliff, Coltrane: The Story of a Sound. I’m currently about halfway through it and have already found a number of interesting points and had several interesting conversations with my partner, Reginald Shepherd, prompted by quotations from the book or points made by Ratliff.

I was both amused and “thought-provoked” (we often speak of something provoking thought without really having a conventional passive-form construction to accompany it – and it was this that I experienced – whereas when we speak of being provoked by something, the implication is generally that it is irritation, and not thought, that has been so provoked) by the following passages from Ratliff’s book describing John Coltrane’s earliest recording session, an amateur session from 1946 while he was in the navy in Hawaii, with Coltrane alongside a few members of a navy band, the Melody Masters, almost ten years before Coltrane rose to any kind of serious prominence (or promise) in jazz circles. Ratliff writes:

“One tune from that amateur session was Tadd Dameron’s ‘Hot House,’ a song that later became known as one of the great compositions of early bebop. ‘Hot House’ is a 32-bar song that first borrows from the chord changes of the standard ‘What Is This Thing Called Love?’ before cleverly altering them. And the seamen try an effortful replication of Dizzy Gillespie and Charlie Parker’s version of the tune, cut a year earlier – except that the navy trumpeter doesn’t solo, as Gillespie did.
“Instead, Coltrane does. In fact, Coltrane, on alto saxophone, takes the only solo – a hideous, squeaking, lurching thing. But perhaps it didn’t matter to the thoroughly preprofessional Melody Masters, because Coltrane had met Bird.
“Some jazz musicians are off and running at nineteen – Charlie Christian, Johnny Griffin, Art Pepper, Clifford Brown, Sarah Vaughan. John Coltrane was not.”

Ratliff is not out to denigrate Coltrane here. On the contrary, Ratliff clearly (and correctly) sees Coltrane as a seminal figure in jazz and music history who was a sort of genius. (One of the things I like and respect about this book is that it neither has an ax to grind against Coltrane or any of his contemporaries – it’s not the sort of work that sees Coltrane’s entire oeuvre as one big hideous, squeaking, lurching thing [see “Vitriol and Jazz”] – nor is it hagiography: Ratliff is critical and doesn’t count every note to have exited Coltrane’s horn as equally golden.)

What Ratliff does here instead is clarify what sort of artistic development Coltrane underwent. Far from being a prodigy who burst onto the scene, Coltrane practiced prodigiously and developed gradually and organically over a long period. Importantly, this continual development of his talent, skill, and expression never stopped until his death, and as Ratliff argues, the development in Coltrane’s music from 1957 until his early death in 1967 has no parallel over a similar period in the creative expression of any other jazz musician. (Frankly, I draw a blank when trying to come up with any artist in any genre with a ten-year period quite like Coltrane’s 1957-1967.)

What Ratliff’s discussion prompted me to think about is the nature of talent, genius, and creative expression. In contrasting Coltrane’s gradual and organic development over long stretches of time with the sort of musician who is “off and running at nineteen,” Ratliff delineates two creative types (two types of geniuses in the case of those whose talent is great) with regard to the process of acquiring or having talent: those, like Clifford Brown, whose talent bloomed quite early, and those, like Coltrane, who only very slowly matured and emerged as talents of great note. (Brown and Coltrane are clearly extreme cases here, with most creative talents falling somewhere on a continuum in between. I also don’t intend at all to imply that Brown’s genius sprang from nothing, as it clearly came from a lot of hard work on his part, but there’s also plenty of evidence to indicate that Coltrane practiced about as hard as it would be possible to practice for a very long period before his promise began to emerge.)

Something I was prompted to think about by Ratliff’s discussion, but which is not the thrust of his arguments, is that there are different sorts of talent (and genius) in terms of one’s approach to creative expression. There are also talents for different sorts of things (e.g. musical talent, talent for visual art, talent for thinking mathematically or verbally, etc.), but what I have in mind here are approaches to creative expression and ways of acquiring talent for expression that cut across the particular fields of creative expression, though I’ll use jazz examples to illustrate.

Two sorts of talent, two approaches to creative expression (without making any claim that these are by any means the only two sorts) correspond at least roughly to Lévi-Strauss’ distinction between bricoleur and engineer, between “mythic” and “scientific” thinking. (See also “Mythic Music.”)

The work of Miles Davis and Coltrane can illustrate.

Davis worked largely through assemblage. Over the course of his career as a band leader, the nature of the music put out by his bands continually changed, often heading in unexpected directions. (While probably no one could have predicted late Coltrane music like that found on albums such as Interstellar Space or Live in Japan from 1957’s Blue Train, from album to album, period to period, there was near-continuous development in a direction unpredictable from the start but nonetheless in a direction. Davis’ music sometimes moved in startling directions after band changes; something like Bitches Brew was probably unpredictable not just from ten years earlier, but from just a couple of years earlier in Davis’ career.) This is related to the way in which Davis often related to his bands over the years, choosing musicians who were on the cusp of new developments and might take the music in new directions, and allowing them remarkably free rein, often offering his musicians little guidance. This is not to suggest Davis had no vision for his work, but that the vision consisted of assembling pieces that could create unpredictable results. As I discussed in the “Mythic Music” post, during the late 1960s and early 1970s, he took this creation through assemblage a step further (in the studio, that is, not live, where this would have been impossible), having the band create recordings of material that was used solely as raw material for him and producer Teo Macero to assemble a musical bricolage from.

Coltrane was much more concerned with musical theory and with making music that expressed his ideas about harmony, rhythm, etc. (not that Davis was unaware of theory, but Coltrane was especially concerned with this as a component of expression). This is not to suggest that Coltrane’s music was some sort of pure expression of an abstract idea, nor that the music came solely from him. Far from it. Like Davis, or any artist, Coltrane drew ideas from all around himself, but much more so than someone like Davis, whose expression worked in a different sort of way, he tended to thoroughly assimilate those influences, incorporating them into a distinct “Coltrane sound.”

Ratliff writes (p. 119):

“… one of the most useful and overriding ways to comprehend the arc of Coltrane’s work, one that contains significance for jazz now, is to notice how much he could use of what was going on around him in music. He was hawklike toward arrivals to his world, immediately curious about how they could serve his own ends, and how he could serve theirs. Every time a jazz musician drifted into New York and began impressing people, every time he encountered a musician with a particular technique, system, or theory, every time a new kind of foreign music was being listened to by others in the scene, Coltrane wanted to know about it; he absorbed the foreign bodies, and tried to find a place for them in his own music. He learned as much as he could of the life around him and behind him, and retained only what best suited him, such that you usually couldn’t tell what he had been drinking up.”

Coltrane’s approach seems a bit like Star Trek’s Borg, assimilating all, gleaning what is unique and useful, but remaining fundamentally the Borg – except that in Coltrane’s case, that’s a good thing.

Tuesday, December 4, 2007

Amartya Sen on a "Clash of Civilizations"

Asia Times has published an interesting interview with economist Amartya Sen, “A language for the World,” conducted by Sanjay Suri of Inter Press Service.

The following is an excerpt from the interview, specifically Sen’s response to a question about the now popular notion of a “clash of civilizations”:

IPS: So is the idea of a clash of civilizations misplaced?

AS: It's a wholly wrong expression. For at least three different reasons.

One, that these divisions of civilization are done on grounds of religion. But we don't have only religious and civilizational identity. When I talk with a Muslim friend, I happen to come from a Hindu background ... whether in India or in Pakistan or in Bangladesh, or for that matter in Egypt or Britain, it's not a relation between a Hindu civilization and a Muslim civilization. It could be two Indians chatting, or two sub-continentals chatting. Or two South Asians chatting, or it could be two people from developing countries chatting. There are all kinds of ways in which we have things in common. So the civilizational division is a very impoverished way of understanding human beings. In fact, classifying the world population into civilization and seeing them in that form is a very quick and efficient way of misunderstanding absolutely everybody in the world.

Second, as these cultures have grown, they have had huge connections with each other. Indian food drew the use of chilli from the Portuguese conquerors. British food is deeply influenced by Indian cooking today. Similarly maths and science and architecture travel between regions. So does literature. So, civilizations have not grown into self-contained little boxes.

The third mistake is to assume that somehow they must be at loggerheads with each other. It is just one division among many. And there are others; there are men and there are women. The gender division. Now if that leads to hostility between them, that will be a different thing. And then one has to see what kind of rhetoric has made that possible. And if there is lack of justice to women, how both men and women may have a joint commitment in overcoming that quality.

It's the totality of neglect of these issues; the multiplicity of identities, the non-insular interactive emergence of world civilization which is increasingly a united one, and the absence of the reason for a battle just when a classification exists, these are the ways in which the rhetoric of a clash of civilizations is not only mistaken, but is doing an enormous amount of harm today.

Tuesday, November 27, 2007

Humanitarian Crisis in North Kivu, DRC

Though not extensively covered in the Western media, the world’s deadliest armed conflict since WWII occurred in the Democratic Republic of Congo (DRC), with an estimated 4 million dead between 1998 and 2003. That conflict has simmered on in North Kivu (a Congolese province bordering Uganda and Rwanda), with full-scale war threatening to break out once more between the official army of the DRC and the dissident troops of General Laurent Nkunda, a conflict that could end up involving foreign troops as well.

Humanitarian crisis doesn’t loom so much as it is already present. This from a recent article in The Economist (“A humanitarian disaster unfolds,” November 17, p. 54): “Making comparisons between humanitarian crises may not always be fair or useful. But those dealing with the emergency in Kivu are starting to do so. ‘The situation at the moment in North Kivu is worse than Darfur,’ says Sylvie van den Wildenberg of the UN mission in the province. More people have fled their homes this year than in Darfur.” As the same article reports, approximately 500,000 people (out of the province’s population of 4 million) have been displaced in the past year or so, 160,000 in the past two months alone. Violence is common, and rape is commonly used as a weapon of war.

See “More Clashes in DRC North Kivu Will Harm Civilians,” from New Zealand’s Scoop, for a general description of the situation. See “The Blood Keeps Flowing,” from AllAfrica.com, for a description of the effects on one town.

Saturday, November 24, 2007

Aging and Culture

A story in a recent issue of the news commentary magazine The Week (November 16), “Mr. Immortality,” reports on the ideas and research of “maverick biologist” Aubrey de Grey. While some of de Grey’s ideas are pretty far outside the current mainstream (e.g. he thinks it possible for humans to routinely live for centuries, if not a millennium), his basic starting point is sensible – to treat the aging of human cells and body parts as the set of physiological processes that it is and to intervene medically in this process as we would with disease. In other words, de Grey doesn’t so much imagine a magical fountain of youth as the continual preservation of life through routine maintenance over very long stretches of time.

De Grey’s ideas are of anthropological interest in at least two ways. First, they call into question the naturalness of aging. Even a cursory awareness of cross-cultural ethnological data makes clear that the ways in which we age are no more purely natural than much else that humans do. Being a young or middle-aged or elder member of a society is strongly influenced by cultural context, and cultural patterns pertaining to physical activity or nutrition play important roles in the aging process as well. Still, in every society up until now, the fact that we age has been inescapable, and de Grey’s ideas potentially challenge this inevitability.

The second is to consider the potential social consequences if aging is no longer inevitable. De Grey imagines a number of consequences that are probably spot on. For example, a rise in risk-aversion strategies – if you can live forever unless you die in a violent accident or incident, you’d probably take things easier (as a child reading Tolkien’s Middle Earth works, one thing I always had trouble accepting was elves – immortal unless physically killed – willingly throwing themselves into battle). He also imagines a rethinking of retirement. It’s one thing to retire in one’s mid-sixties when one realistically expects to live just a decade or two longer than that, quite another if one expects to live several centuries. (For that matter, a variety of factors are already gradually leading to an upward shift in retirement age anyway, probably the most important being the potential insolvency of social security, but also expectations of longer life – even if not on the scale imagined by de Grey.)

In other ways, I find de Grey’s predictions limited, in large part because he is a utopian. He clearly sees the drastic expansion of human lifespans as something extended to all. For example, when asked about the consequences of such longer life and anti-aging maintenance, he replies, “If we want to hit the high points, number one is, there will not be any frail elderly people.” I find this much harder to imagine than the possibility of humans living a thousand years. Barring a complete transformation of global political and economic realities (something that could always happen but which I don’t at all foresee), the more realistic possibility is an extreme exacerbation of social inequalities, both between the developed and developing worlds and within specific nation-state contexts, with inequality encompassing not just differences in material prosperity but also lifespan. (This too would be an exacerbation of an already existing pattern. According to data from The Economist, fifteen states [or similar entities] have populations with average life expectancies in excess of 80 years [Andorra, Japan, Hong Kong, Iceland, Switzerland, Australia, Sweden, Canada, Macau, Israel, Italy, Norway, Spain, Cayman Islands, and France], while six, all in Sub-Saharan Africa, have populations with average life expectancies lower than 40 years [Swaziland, Botswana, Lesotho, Zimbabwe, Zambia, and Central African Republic].) I, of course, prefer de Grey’s imagined world, but I find it easier to imagine a small economic elite with access not just to fabulous wealth but also effective immortality, while in much of the world “frail elderly people” remain normal, and perhaps a middle group with partial access to greatly enhanced lifespan.

Thursday, November 8, 2007

Research, Teaching, and Music Performance

The other day I had a very nice conversation with a graduate student I work with. This particular student is just beginning field research for his thesis, which, in a nutshell, will address issues of booth rental and wage labor in hair salons – a topic that taps into debates in political economy going back at least to Ricardo and is also rich with interesting ethnographic detail. This student, like a lot of (probably most) ethnographers, is using a combination of participant observation and flexible, open-ended interviews.

He noted that he was pleased by how his first interviews had gone and by their highly flexible quality, with interviewees often taking the conversation in interesting and unanticipated directions. He also said that he felt confident working in this highly flexible and even improvisatory setting because of the significant amount of preparation for his fieldwork he had engaged in along with me and other members of his committee.

I drew an analogy to certain aspects of teaching. Specifically, there is a performative quality to research methods like participant observation and flexible, open-ended interviewing that has something in common with the performative quality of some teaching, e.g. leading an effective class discussion. Effectively leading discussions requires preparation and organization – you have to know your stuff – but I find that the most effective discussions are true conversations that can often lead in unexpected directions. There is improvisation, but based on sufficient organization and preparation that I’m confident enough to set aside preset plans and follow an interesting lead. (This doesn’t mean that anything goes in class discussion – or open-ended interviewing – some comments are outside the domain of relevance, are too tangential, and require reining in, though it can sometimes be difficult to tell in the moment what is too tangential and what is not.) Not all teaching works this way, though. Sometimes a thoroughly preplanned lecture is the best and most efficient way to communicate information to a class – there can always be room for questions and clarifications, but within a plan.

Then, another analogy struck me. Some research (in this case, participant observation and flexible interviewing strategies) and some teaching (e.g. leading class discussion) is analogous to jazz performance, while other research (e.g. more controlled interviewing or survey research) and other teaching (e.g. delivering a preplanned lecture) is more analogous to classical performance.

Jazz performance is highly improvisatory. When performed well, though, jazz is not chaos or noise; it is based on thorough preparation and practice that allow a skilled musician to dispense with rigid adherence to formulae and play freely. The same is true of skillful performance of certain research and teaching strategies.

With some exceptions (typically highly delimited, and occurring either in music from the baroque period or earlier or in very recent classical composition), classical performance is highly scripted rather than improvisatory. The musicians follow a definite score. Something like survey research tends to work similarly, with attention paid to following a scripted questionnaire and to controlling as much about the research environment as possible, so as to limit the number of variables that might contribute to differences in question responses.

In both cases here, classical performance and survey research, though, even within the highly scripted context, there is nuance and interpretation to performance. Different performances of the same classical works can sound quite different based on subtle differences in interpretation and performance of the music’s details, producing highly different results. With something like survey research, there is an art to getting people to respond to questions, and doing so without either inhibiting or overly influencing respondents’ replies through the details of posture, facial expression, or a wide variety of vocal qualities. (As an aside, the film Kinsey presents several examples of such things to be avoided by interviewers in a formal research setting. In the film, we learn about Alfred Kinsey as a person via several scenes in which he trains students in interview techniques by having them interview him. It’s an innovative way of delivering exposition about the subject’s life in a biographical film without slipping into the clichés of biopics. Along the way, it’s the only movie I’ve ever encountered that seriously explores social science research methods.)

Monday, November 5, 2007

Mythic Music: Stockhausen, Davis and Macero, Dub, Hip Hop, and Lévi-Strauss

It’s not particularly news to say that much contemporary music, popular or otherwise, is constructed through assemblage, put together from pre-existing pieces in what Lévi-Strauss called bricolage (and which he associated especially with mythic rather than scientific thinking) – creating something new out of assorted odds and ends of things already there. This is especially clear with hip hop and its heavy use of sampling previously existing music and sounds, though the use of sampling and re-mixing is not confined to that genre.

To say this is to neither praise nor criticize – it is simply to make a comment on a key quality of much if not most contemporary music. Such musical bricolage can be highly creative (to pick just one example I’m fond of, System of a Down frontman Serj Tankian’s “Bird of Paradise (Gone)” from Bird Up – the Charlie Parker Remix Project uses Parker’s “Bird of Paradise” and other musical odds and ends as source material for something that’s really less a remix than a truly new piece of music), tedious (with many hip hop and pop songs, the most interesting thing is trying to remember which previous bland pop song it is that’s being so obviously sampled), and/or an attempt by record labels to cash in on back catalogue material with remix projects (the Bird Up album I mention above is overall pretty good – but it’s also a crass attempt by Savoy Jazz to make more money from a catalogue that’s been marketed many times over).

Musical bricolage didn’t start with hip hop. One of the key antecedents of remixing and sampling in hip hop is Dub, which in the 1970s essentially involved reformulating the elements, i.e. early remixing, of reggae songs.

One of the earliest instances of music produced through bricolage in a popular genre was the work of Miles Davis and producer Teo Macero on albums like Bitches Brew and A Tribute to Jack Johnson. What they did on these albums in the late 1960s and very early 1970s was, of course, not completely unprecedented. Structurally, what they did was anticipated by the composer Karlheinz Stockhausen – an influence Davis explicitly acknowledged at the time.

What made their work at the time quite different from most everything else done in jazz up until that point was the way in which the final songs appearing on the albums were constructed from multiple takes of different tracks recorded in the studio (as opposed to the standard jazz practice of releasing whole takes, even if multiple takes of a song were recorded, with the best take being the one released).

Even before this, there had been much use of overdubbing in the production of pop and rock recording. Also, in classical music there had been instances of taped material being incorporated alongside conventional instruments in the performance of a musical work. What Stockhausen and Davis and Macero were doing was structurally a bit different.

Conventional overdubbing allows for a finished recording to be constructed from elements recorded in separate instances. However, this isn’t bricolage. The piece of music is pre-planned, a structure is designed and then carried out – i.e. this is an instance of “engineering” (to invoke Lévi-Strauss’ contrast between the engineer/scientific thought and the bricoleur/mythic thought). Overdubbing simply allows a designed structure to be implemented by breaking a task down into constituent parts (a classic “scientific” maneuver) before putting each in its proper place. Earlier classical pieces that incorporated taped material tended to be of the same sort of “engineered” music.

What was different with Davis’ recordings beginning in the late 1960s was that the tracks that were recorded were not constituent parts of a designed piece. Instead they were freely improvised works in their own right, recorded with the sole intent of serving as raw material (something that has by no means kept Columbia Records from cashing in on all these recordings by releasing them recently in a series of massive box sets – and frankly, much of the material is well worth listening to in its own right, even if it was never intended for release as is) – previously existing odds and ends out of which finished songs were constructed, bits and pieces taken from here and there in a true process of bricolage. (If one wanted to qualify this, it could be called engineered bricolage, insofar as the oddments for assembly were themselves intentionally designed to serve as such, unlike the found odds and ends of dub producers or more recent remixers.)

There are numerous partial examples of musical bricolage from earlier periods. That’s essentially what musical quotation is, but wholesale bricolage, where entire works are constructed of previously existing material, is fairly new in the history of Western music.

In a variety of his works, Lévi-Strauss drew parallels between the structure of myth and music. One parallel is the co-dependence of the synchronic and diachronic in both myth and music. Myth narratives and musical pieces unfold through time, and without this diachronic element there is no narrative, whether mythic or musical. But all the while, the experience of the unfolding chain of events is filtered through synchronic structure – there is not simply a random unfolding of events, but things happening in relation to what has happened before and to expectations of what will happen now and in the future, without which there is only noise.

At the same time, Lévi-Strauss strongly associated mythic thought with bricolage. Mythic thinking involves understanding the world through taking the already there and reassembling it. (He was also rightly aware that even at our most “scientific,” we never impose structure on the world without constraint or without precedent.) But here (until recently, at least) a full parallel with music breaks down. For several centuries, western music, especially western art music, worked in an engineering mode. For example, think about the sometimes mechanistically imposed structure of canon or sonata form, or later serialism.

In Myth and Meaning, Lévi-Strauss made an interesting conjecture. He noted that western art music rose to prominence at roughly the same time that mythic thinking was more and more giving way to scientific thinking in scholarship and western discourse generally. He conjectured that some of the organization of experience typical of mythic thinking was transposed onto thinking through music with its new prominence.

Regardless of the value of that conjecture (I’m not sure how one would go about proving it one way or another), I think it’s important to note that music and myth are structurally similar in some ways (e.g. the organization of the experience of time), but that until recently the quality of bricolage so typical of myth has not been characteristic of music. What’s new about Stockhausen’s experiments, Davis and Macero’s work in the late 1960s and 1970s, dub, and hip hop is the creation of music in a fully mythic mode.

Friday, November 2, 2007

An Appreciation of Dizzy Gillespie

I just ran across an interesting appreciation of Dizzy Gillespie (on what would have been his 90th birthday) by Doug Levine in Contacto magazine. I encountered it serendipitously: I was doing a news search for articles on the Middle East, including Tunisia, and this article popped up because of its mention of the Gillespie song “A Night in Tunisia.”

For what it’s worth, I’d like to add my own appreciation of Gillespie. He’s certainly not a forgotten or unappreciated figure in the history of jazz or western music in general – with his chipmunk cheeks and distinctive 45-degree trumpet bell, his is one of the most recognizable images in jazz history.

Still, I think an argument could be made that his significance has been underappreciated, and that he’s been taken a bit less seriously than some of his contemporaries.

He was an important jazz innovator, particularly for his contributions to the creation of bebop in the 1940s and Afro-Cuban jazz in the 1950s, though here his reputation is often overshadowed by that of bebop co-creator Charlie Parker or later innovators like Miles Davis and John Coltrane. He was important in maintaining the vitality of the jazz big band in the 1950s, though here he’s often overshadowed by Duke Ellington, who continued to be the biggest name in big band, or the collaborations between Davis and Gil Evans. He was an important jazz songwriter, though here often overshadowed again by Ellington, but also Thelonious Monk, Charles Mingus, and others. Where he’s gotten the most due credit is with regard to his individual virtuosity on the trumpet (other names may be mentioned as equals here, but rarely have I encountered arguments to the effect that so-and-so was a more virtuosic talent than Gillespie) and as a popularizer and ambassador for the music.

What’s most amazing about Gillespie is that he was all these things at once and at the height of his career – an important innovator, band leader, songwriter, virtuosic soloist, and popularizer and good will ambassador for jazz.

What his career lacked was a touch of the legendary or a heavy dose of pathos – and it does seem that jazz legends are supposed to be tragic figures. While the quality of their music speaks for itself and is in little need of elaboration, Parker, Davis, and Coltrane are jazz legends in large part because of the narratives associated with them: the personal battles of each with drug addiction, the too-early deaths of Parker and Coltrane, the at-times prickly personality of Davis, and so on. Gillespie was, as far as I can tell, a universally loved figure, but given the general lack of pathos and tragedy in his public personal narrative, alongside his stage persona as an affable (and admittedly at times corny) entertainer, he’s treated less seriously by many jazz fans than Parker, Davis, Coltrane, and others.

Thursday, November 1, 2007

Washoe

The chimpanzee Washoe has died. Probably one of the most famous non-human individuals, Washoe, along with several other individual apes, played an important role in ape language and communication research.

Washoe learned around 250 American Sign Language word signs. Though there is debate about the extent to which Washoe could be regarded as using language, the research involving her helped clarify commonalities and differences in human and chimpanzee communication, as well as the qualities of chimpanzee cognition.

An article about Washoe can be found here.

Wednesday, October 31, 2007

Beringia and Human Migration to the Americas

Anyone interested in the ongoing debates about human migration into the Americas may want to take a look at a recent news article at Science Daily, New Ideas About Human Migration From Asia To Americas. The article reports on a recent study by Ripan Malhi and colleagues at the University of Illinois published in the Public Library of Science.

The following is from the Science Daily article:

“What puzzled them originally was the disconnect between recent archaeological datings. New evidence places Homo sapiens at the Yana Rhinoceros Horn Site in Siberia – as likely a departure point for the migrants as any in the region – as early as 30,000 years before the present, but the earliest archaeological site at the southern end of South America is dated to only 15,000 years ago.
“These archaeological dates suggested two likely scenarios,” the authors wrote: Either the ancestors of Native Americans peopled Beringia before the Last Glacial Maximum, but remained locally isolated – likely because of ecological barriers – until entering the Americas 15,000 years before the present (the Beringian incubation model, BIM); or the ancestors of Native Americans did not reach Beringia until just before 15,000 years before the present, and then moved continuously on into the Americas, being recently derived from a larger parent Asian population (direct colonization model, DCM).
“Thus, for this study the team set out to test the two hypotheses: one, that Native Americans’ ancestors moved directly from Northeast Asia to the Americas; the other, that Native American ancestors were isolated from other Northeast Asian populations for a significant period of time before moving rapidly into the Americas all the way down to Tierra del Fuego.
“Our data supports the second hypothesis: The ancestors of Native Americans peopled Beringia before the Last Glacial Maximum, but remained locally isolated until entering the Americas at 15,000 years before the present.”
“The team’s findings appear in a recent issue of the Public Library of Science in an article titled, “Beringian Standstill and Spread of Native American Founders.”

Tuesday, October 30, 2007

Reginald Shepherd on Samuel R. Delany

On his blog, Reginald Shepherd has written an engaging overview of the work of science fiction writer Samuel R. Delany, On Samuel R. Delany.

Here’s a quotation from Shepherd’s essay:

“Samuel R. Delany is a prolific science fiction writer, memoirist, self-described pornographer, literary critic, and social commentator. Since the publication in 1962 (when he was twenty) of his first book, The Jewels of Aptor, he has published numerous novels, short stories, essays, interviews, cultural commentary, and memoirs. What's most remarkable about this prodigious output is its consistent quality, wide range, and continual development. Delany has never been one to repeat himself or rest on his laurels. Unlike some writers who began in the genre and subsequently sought literary respectability, and despite his numerous works in other genres, Delany has always strongly identified himself as a science fiction writer. But his work has always pushed at and expanded the boundaries and conventions of the field, constantly seeking out new forms, ideas, and themes. Indeed, his work has become more challenging and complex over the course of his career.”

I’ve discussed Delany on this blog before (“Uses of Myth” and “Myth, Mythic Literacy, and Contemporary Culture”). Science fiction in general is a genre ethnographers should take seriously, given the parallel ways in which both involve the presentation in textual form of plausible worlds (though with the key difference that ethnography is ideally based on empirical fieldwork). Delany in particular is a science fiction writer worth taking seriously by anthropologists both for the consistently stimulating quality of his work and for the ways in which he takes seriously anthropological ideas and ideas from across the humanities and social sciences and incorporates them into his construction of plausible worlds.

Here is Shepherd again on a Delany novel that may be of particular interest to anthropologists:

“Babel-17 (1966), inspired by the famous Sapir-Whorf hypothesis of linguistic determinism (that our language controls our thought), centers on the efforts of the poet Rydra Wong to crack what is believed to be a military code used by an alien race with whom Earth is at war. What she finally discovers is that this code is a highly exact and analytical language which has no word for “I,” and thus no concept of individual identity. The novel examines the capacity of culture and language not only to control the way people see and act in the world but to determine who they are as persons. “The limits of my language mean the limits of my world,” as Ludwig Wittgenstein so famously wrote. Two different words imply two different worlds.”

Saturday, October 27, 2007

Drinking and Cheating

As I noted a few posts ago (“On Why I’ve Not Posted Much Recently”), I recently attended the U.S. Department of Education’s annual meeting on Alcohol and Other Drug Abuse and Violence Prevention in Higher Education. One event I attended at this meeting was a “Town Meeting” (actually a fairly standard panel discussion, with short presentations by several panelists, followed by questions and open discussion) on the topic “Complementary or Contradictory Prevention Strategies: Finding a Balance Between Nonuse and Harm Reduction Messages.”

One speaker I found especially interesting was James Bryant, senior youth program specialist for Mothers Against Drunk Driving’s UMADD program (basically MADD on university campuses). While most of the other panelists argued for a complementary strategy of emphasizing nonuse of alcohol for underage students, or those not wishing to drink on college campuses, alongside harm reduction messages for students who do choose to drink, Bryant, speaking specifically about underage students, argued forcefully and consistently for nonuse prevention strategies.

Bryant made a number of interesting arguments to this end. He pointed out that 18- to 21-year-olds who go on to college have higher drinking rates than those who do not, an interesting correlation whether or not you accept his conclusion that there must be something about the atmosphere of college campuses that contributes to this (personally, I think he’s probably right on this). He also argued that harm reduction strategies tend to reaffirm the naturalness of drinking on campuses (I find this claim plausible, but I’m not sure I’d consider it probable, much less proven – see my recent post, “Possible, Plausible, Probable, Proven”).

His other arguments were basically that since underage drinking is illegal, and since students who don’t drink can’t drink and drive, there should be consistent use of alcohol nonuse messages.

He then employed an interesting analogy. The rationale of harm reduction messages is that some students will drink anyway, so we should emphasize “responsible drinking” or “drinking in moderation.” He argued that that’s a bit like arguing that since some students will cheat on tests no matter what we do, that we should emphasize “responsible cheating” or “cheating in moderation” – something that, of course, no college campus would do.

While his talk was engaging and provocative, and while I do have the utmost respect for his organization, I ultimately found the analogy to be limited when applied to the university setting. There are two complications. First, while underage students who drink might be “cheating,” students who are 21 or over are engaging in legal behavior when they drink – they’re not cheating. (They might drink illegally or illicitly in prohibited places, but their drinking per se is perfectly legal.) Second, while it’s true that people who don’t drink can’t drink and drive, it’s not the case that people who drink necessarily drink and drive. That is, “drinking” doesn’t seem to me to be “cheating” (especially for of-age students) in the same way that “drinking and driving” might be, and harm reduction strategies are better suited to making such distinctions (perhaps in combination with nonuse messages for underage students).

In continuing to think about Bryant’s analogy, in particular the “cheating” side of things, I began to realize that I tend to take a “harm reduction” approach to cheating. Bryant’s right that I wouldn’t ever tell students to cheat responsibly or in moderation, but in practice I tend to structure course assignments in such a way as to mitigate the harmful effects of cheating rather than emphasizing the policing of cheating. For example, I’m aware of how easy it is for students nowadays to copy and paste a document off the web to submit as a paper. When I assign papers, part of the assignment is to produce a number of shorter texts in stages (such as selection of the topic, an abstract, an outline detailing the logical argument and sources of evidence for the paper, a rough draft, and a revised draft). In part, this helps students write better papers, and that’s my main reason for structuring the assignments this way, but it also means that it’s barely worth it for a student to plagiarize a text from the web, because they’ll have to recapitulate the process of having written it in the first place in order to get a decent grade (and they’ll end up learning something despite their best efforts not to).

Thursday, October 25, 2007

Economics, Human Evolution, Genetics, and the Obesity Epidemic

At a recent research symposium on Addictive and Health Behaviors Research, I heard an informative talk by Kelly Brownell, co-founder and director of the Rudd Center for Food Policy and Obesity at Yale University.

Brownell’s talk was titled “A New and Important Frontier: Food and Addiction.” A key topic of his talk was whether “food addiction” is a real phenomenon for some individuals or a bad analogy drawn with addiction to a variety of mind-altering substances. He concluded that, at least for some, food addiction probably is a real clinical phenomenon, drawing on several bodies of evidence: foods high in sugar or fat have been shown to cause dopamine production in a way similar to that of many drugs (i.e. the experience of pleasure from such foods is not just in the taste buds); there’s evidence of addictive behavior around such foods in some lab animals; and the narratives and descriptions of favorite foods by “food addicts” mirror those of drug addicts.

In the process of laying out his arguments about food addiction, Brownell gave an overview of the obesity epidemic in the U.S. over the past few decades. Much of what he covered was generally available knowledge, though his comprehensive synthesis of a vast amount of material was impressive.

These were by no means the only factors he addressed (see the Rudd Center’s website that I linked above for a fairly comprehensive overview of obesity research), but I was particularly struck by his comments on economics and human evolution.

Economics and Obesity

Brownell addressed economics and obesity in several ways.

Agricultural Economics and Obesity

As many are aware, industrial agriculture is heavily subsidized in the U.S. and many other developed countries. In the U.S., corn (maize) interests are particularly well positioned with regard to subsidies. In its current form, such heavy subsidization dates back to the Nixon era, when it was intended as a way to combat food price inflation.

An effect of this was the tremendous growth of corn and other agribusiness, and the development of a number of corn products that were unanticipated at the time (greater availability of corn oil and the development of high fructose corn syrup), all kept artificially cheap by agricultural subsidies. One result is that processed foods high in fats and sugars are often quite cheap, especially when compared to the prices of healthier foods, in particular the relatively high cost of fresh produce. So, for example, even while some fast food chains commendably offer healthy salad options, the healthy options tend to be quite expensive compared to the price of a meal of corn-fed-beef patties, potatoes fried in corn oil, and high-fructose-corn-syrup-laden beverages in giant portions.

Junk Food as a Caloric Bargain

High fat and/or high sugar foods nowadays tend to be available cheaply, at least in the U.S. and other developed countries – and increasingly this seems to be true elsewhere as well. Brownell made another interesting point here, though. If we look at food economics not just in terms of monetary cost per item but in terms of cost per calorie, junk food is a tremendous bargain. By weight, junk food is typically already cheaper than healthier food, but calorie for calorie, it is tremendously cheaper.
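To illustrate what “calorie for calorie” means here, a quick back-of-the-envelope calculation in Python (my own sketch, with made-up round numbers rather than real prices or nutrition data) compares cost per calorie rather than cost per item:

# Hypothetical items: (name, price in dollars, calories); the numbers are invented for illustration.
items = [
    ("fast-food value meal", 5.00, 1200),
    ("fresh salad with dressing", 7.00, 350),
]

for name, price, calories in items:
    # A lower cost per 100 calories means a bigger "caloric bargain."
    cost_per_100_kcal = price / calories * 100
    print(f"{name}: ${cost_per_100_kcal:.2f} per 100 calories")

With these invented figures, the high-calorie meal works out to a fraction of the cost per calorie of the salad, which is the shape of the comparison Brownell was pointing to.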

Poverty and Obesity

On top of the basic economics of food in the U.S. today, in impoverished communities high fat and/or high sugar foods tend to be readily available and relatively cheap (even if not as cheap as the same foods in other areas, given the lower incidence of full-service grocery stores), while things like fresh produce are often hardly available at all and then only at higher prices, contributing to the problem of obesity in poor communities.

Human Evolution and Obesity

I was happy to see Brownell address a topic often left out of debates about obesity: human evolution. There’s strong evidence that humans generally take great pleasure in fatty or sweet foods (the dopamine response mentioned above). This is something we share with other mammals, and it is almost certainly something selected for in our evolutionary history.

This makes perfect sense – foods high in fats and sugars are caloric bargains, but are not particularly common in many natural environments. Animals who take pleasure in eating these foods would tend to seek them out more often and would tend to have an evolutionary advantage over those who didn’t.

But take this evolutionary heritage and add it to an economic environment unlike any our hominid or earlier primate ancestors ever adapted to, with an over-abundance of sugars and fats, and you get the obesity epidemic.

Genetics and Obesity

Both during his talk and during the question session, Brownell spoke of genetics as a factor mainly in order to dismiss it as significant. I had been similarly dismissive of genetics as a significant factor in producing patterns of obesity before hearing this talk, and I generally agree with his perspective here, particularly at the level of populations and gene pools: gene pools haven’t changed in the past 20-30 years in any significant way; the food environment has changed in multiple significant and obvious ways; therefore, genetics is not a serious consideration.

Interestingly, as I listened to Brownell present a position similar to the one I have tended to take, I began to see how genetic predisposition could nonetheless become a factor in obesity at the individual level. With increases in rates of obesity, we’re talking about a change in phenotype. Phenotype is always the product of genotype in interaction with environment. In this case, genotypes haven’t changed; it’s a variety of environmental factors that have changed. But that doesn’t mean that the changing phenotype is solely the product of the changing environment, for phenotype is, again, always the product of that relationship between genotype and environment. A genotype that didn’t contribute to an increased predisposition to obesity in one context might do so in another.
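To make that last point explicit, here is a minimal toy sketch in Python (my own illustration, not anything Brownell presented; the genotype labels and all of the numbers are invented) of a genotype-by-environment interaction, where the same genotype is associated with elevated risk only in a calorie-rich environment:

# Toy model of genotype-environment interaction; all values are invented for illustration.
def obesity_risk(genotype: str, environment: str) -> float:
    risk = 0.10  # hypothetical baseline risk
    if environment == "calorie-rich":
        risk += 0.20  # the environmental shift raises risk for everyone
        if genotype == "high-response":
            risk += 0.25  # this genotype only makes a difference in this environment
    return risk

for env in ("calorie-scarce", "calorie-rich"):
    for gen in ("typical", "high-response"):
        print(f"{env:14s} {gen:13s} risk = {obesity_risk(gen, env):.2f}")

# In the calorie-scarce environment the two genotypes look identical;
# in the calorie-rich environment the same genotypic difference is expressed.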

Still, I agree with a point that Brownell made during the Q and A session. Regardless of any potential genetic predisposition to obesity that some individuals may have, from a prevention or intervention standpoint it’s essentially irrelevant. At the population level, environmental factors are clearly the directly relevant ones, and genetic predispositions aren’t something that can be particularly addressed at that level anyway. But even for individuals, for a person attempting to lose weight, the trick is to expend more calories than are taken in, irrespective of genotype.

Wednesday, October 24, 2007

Feminism as "F Word"

I’ve written previously of Adelin Gasana. Gasana is an undergraduate student at the University of West Florida, where I teach, and is quickly developing his skills as a documentary video artist. A couple of weeks ago, at the annual meeting of the Semiotic Society of America in New Orleans, he presented part of his most recent video, “The F Word.” This video and others can be found online at his website.

“The F Word” is about feminism, stereotypes of feminists and feminism, and attitudes towards feminism. One thing that particularly struck me when viewing the documentary was that much feminist discourse has become reactionary, responding to backlash and stereotype by emphasizing what feminism and feminists are not rather than what they are. (Note that I’m not saying that Gasana’s video is reactionary, but that it depicts a discourse that has often become reactionary.) Speaker after speaker, in responding to questions about what feminism is, replied in the negative first – essentially, feminism is not a bunch of bra-burning, granola-eating, hairy-legged, clog-dancing lesbians on a commune.

I realize that the speakers on the video are not representative of feminism in general. Gasana spoke mainly, though by no means exclusively, with feminists and/or local scholars in Pensacola, Florida. The South in general is one of the more socially conservative regions of the U.S., and Pensacola is arguably located in one of the more conservative portions of the South. This no doubt shapes the experience of feminists and other varieties of progressives. At times, it’s hard not to feel besieged as a progressive in Pensacola.

Still, the speakers on Gasana’s video are not completely unrepresentative of contemporary feminism in general either. There is a variety of contemporary feminism that works primarily in the negative – call it backlash-backlash, or something like that. I’ve not done any sort of systematic survey of the feminist literature, so I can’t say exactly how prevalent it is, but I’ve read enough feminist theory and scholarship that I’ve encountered this form of defining feminism by what it isn’t well beyond Pensacola and the South. In fact, some of the clips featured in Gasana’s documentary feature feminist writers speaking on national news talk shows.

There are two things that disturb me about this reactionary variety of feminism. First, it’s inherently self-limiting. It’s become defined by a conservative opposition’s stereotypes. Second, it’s marginalizing. It reminds me of the sort of gay scholar or activist who, in aiming for middle-of-the-road respectability (and I think Texans might have it right when they claim that there are only dead armadillos in the middle of the road), emphasizes that flamboyant drag queens in pride parades don’t represent the gay community. While drag queens are perhaps not representative, they are important members of the gay community. While I can’t vouch for the bra-burning or clog dancing, I’ve met a number of granola-eating, hairy-legged lesbians who live on communes and are staunch feminists not deserving to be marginalized in some game of respectability. Conservatives who would deny women or gay men equality are the opposition, not women or gay men who don’t happen to meet middle-of-the-road standards of respectability that are in fact the standards and expectations of that conservative opposition.

Tuesday, October 23, 2007

On Why I've Not Posted Much Recently

In the past month or so, I’ve not posted as much on this blog as I would normally like to have done. There’s a reason for this, and it’s fairly simple – I’ve just finished preparing and delivering three presentations in the last four weeks: “Analysis of Students’ Cultural Models of Drinking and Related Contexts and Activities,” a poster co-written with Debra Vinci and presented at the 2nd annual Symposium on Addictive and Health Behaviors Research sponsored by the University of Florida and held at Amelia Island, FL; “Difficulty in Ethnographic Writing” (which I posted recently as a blog post), presented at the annual meeting of the Semiotic Society of America in New Orleans; and a workshop, co-presented with Mica Harrell, Rebecca Magerkorth, and Debra Vinci, “Building Campus Prevention Partnerships: Collaboration of Faculty and Student Affairs Administration in Implementing Evidence-based Alcohol Abuse Prevention,” presented at the U.S. Department of Education’s Annual conference on Alcohol and Other Drug Abuse and Violence Prevention in Higher Education held in Omaha, NE.

It’s been quite interesting in the span of a month to attend three such different conferences (a health sciences research symposium, an interdisciplinary semiotics conference, a conference emphasizing the importance of research and evidentiary base for programming but which was geared primarily to health programming and planning) in three very different places (a secluded resort directly on the Atlantic, New Orleans, and Omaha).

To my surprise, as a place, I enjoyed Omaha the most.

Amelia Island Plantation is a nice resort, and its seclusion helped keep the focus on the symposium’s activities, but I’m just not a big fan of resorts. They tend to bore me, and they creep me out a bit with the excessive servility that tends to be expected of the hospitality staff.

I wouldn’t say I have a love/hate relationship with New Orleans, but more a love/repulsion relationship. I’ve long been attracted by many aspects of the city and repulsed by much else (such as the shenanigans along Bourbon Street and the endemic poverty that’s never seemed to get any better). Since Katrina, this has been deepened – I’ve been heartened with each visit I’ve made there by the ways in which parts of the city have recovered, but always leave with a heavy heart because of the many ways in which much of the city has not.

Omaha, though, surprised me. My apologies to residents of the city – I assumed it would be bland at best, but instead I found a city much more interesting than I had expected, especially in terms of the architecture of its buildings and public spaces, as well as its food – I enjoyed a good Indian restaurant and a decent Persian one.

In any case, now that I’ve completed an intense month of prepping for and attending conferences, I look forward to posting much more regularly here. I plan a short series of posts, starting tomorrow, to highlight and discuss interesting information and presentations I encountered at these three conferences.

Saturday, October 13, 2007

Recent News on Gay Men and HIV

In the past couple days, I’ve encountered two interesting news articles pertaining to current trends in HIV epidemiology among men who have sex with men.

Science Daily has published the article “Lack Of HIV Prevention For Male Sex Workers In The Caribbean Could Fuel AIDS Epidemic.”

The following two paragraphs are a quotation from the article:

“Male sex tourists, largely from the United States and Europe, may be fueling an HIV/AIDS epidemic in the Caribbean, and efforts to stop the epidemic will be severely hampered unless HIV prevention dollars are diverted to help male prostitutes, a new study suggests.

“Additionally, the study should serve as call to action for the tourism industry to implement HIV/AIDS prevention programs for tourists and tourism employees, said assistant professor Mark Padilla of the University of Michigan School of Public Health. The Caribbean is second only to sub-Saharan Africa in HIV/AIDS cases. The disease has been described as primarily heterosexual, Padilla said. However, Padilla's book shows that sexual contact between Caribbean male sex workers and male tourists may be a much larger contributor to the HIV/AIDS epidemic there than previously thought. Currently, prevention dollars in the Caribbean serve primarily heterosexuals, and this particular population of male sex workers who have sex with tourists is largely neglected. That population of male prostitutes grows larger as the traditional, agricultural jobs dry up. Funding comes from a variety of sources: governments, multilateral organizations such as the World Health Organization, and private foundations.”

The Oregonian has published “Guessing about HIV may keep epidemic going.” The following are quotations from the article:

“More than two decades after the first warnings that condoms prevent the spread of HIV, an increasing number of gay men are instead betting their lives on vague conversations and verbal assurances from their partner before having unprotected sex.

“The Centers for Disease Control and Prevention reports that nationally, the number of HIV and AIDS diagnoses among men who have sex with men increased 11 percent from 2001 to 2005. Researchers in Oregon and elsewhere say one reason could be that men attempt to sort themselves. HIV-positive men limit their partners to others with HIV; those without the disease avoid sex with those who have it. But some experts say it's more of a guessing game because too few men directly ask or answer, "Do you have HIV?"

“Serosorting is a shaky prevention strategy for healthy men, not so much because men lie to their sexual partners -- most don't, especially not those who are HIV positive. Instead, HIV prevention specialists say, men afraid of rejection or who are embarrassed to talk about sex dance around the topic, behavior also seen in heterosexuals. Gay and bisexual men might drop hints about taking medication, for example, and hope their partner understands they mean HIV medications.

“Some men, aware that anal sex is riskiest for the receptive partner, assume it's that person's responsibility to ask for a condom. Other men who say they're negative cite outdated HIV test results. And 1 in 4 people infected with HIV doesn't know it.”

Friday, October 12, 2007

Luna Shipwreck

Archaeologists from the University of West Florida’s Archaeology Institute and Department of Anthropology (where I teach) have just publicly announced the discovery of a 16th century Spanish ship in Pensacola Bay, almost certainly one of the ships associated with Tristan de Luna’s 1559 expedition to establish a settlement at Pensacola. The find is significant both because of the rarity of shipwrecks from the period in the Americas and because Luna’s expedition was one of the first (although not the first) attempts to establish a permanent European settlement in the Americas north of Mexico and the Caribbean.

As a member of the UWF anthropology department and a resident of Pensacola, I find it quite pleasant to see unqualified good news about the university and the city receiving national coverage. I expected prominent local news coverage of the announcement (see the Pensacola News Journal article here), but I was also pleasantly surprised to encounter Yahoo News picking up the AP newswire account in their U.S. national news section.

Generally, when Pensacola receives national attention it’s because of a hurricane, corrupt politicians, or something else bad. It’s not that plenty of good things don’t happen here – they’re just generally of local interest.

Kudos to the archaeology faculty, staff, and students who helped in making this discovery.

Thursday, October 11, 2007

Possible, Plausible, Probable, Proven

I wrote this post for the blog I keep for a course, Peoples and Cultures of the World, and originally intended it primarily for a student audience. However, I think it fits well here, too.


“Possible,” “Plausible,” “Probable,” and “Proven” are terms used to indicate rough degrees of statistical probability that something will happen or that some proposition is true. (My use of “probable” here reflects the vernacular. When we say that something is probably true, we don’t mean that it has just any level of statistical probability, but specifically that it is quite likely to be true.)

The terms do reflect an ascending order of probability (and a nested one – anything that is plausible is also possible; anything proven is also probable, plausible, and possible), though not in a numerically precise way. They represent a sort of qualitative statistics. When we can realistically indicate precise probabilities, that is obviously a useful thing, but even a rough sense of degree of probability is far more useful than no such sense at all.
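For anyone who finds it helpful to see that nesting spelled out, here is a minimal sketch in Python (purely my own illustration; the names and ordering simply encode the scale described above):

from enum import IntEnum

class Claim(IntEnum):
    # Ascending order: each level entails all of the levels below it.
    POSSIBLE = 1   # violates no logical or physical laws
    PLAUSIBLE = 2  # believable, makes sense, but evidence is thin
    PROBABLE = 3   # very likely to be true, given the evidence
    PROVEN = 4     # so likely that we pragmatically treat it as true

def entails(higher: Claim, lower: Claim) -> bool:
    # A claim at a higher level is also at every lower level (the nesting).
    return higher >= lower

# Anything proven is also probable, plausible, and possible:
assert entails(Claim.PROVEN, Claim.POSSIBLE)
# But the merely possible is not thereby plausible:
assert not entails(Claim.POSSIBLE, Claim.PLAUSIBLE)

The sketch only encodes the ordering; the point of the post is that moving a claim up the ladder requires evidence, which no amount of labeling supplies.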

Errors in thinking arise whenever we jump up this ascending ladder of probability without evidence, or without sufficient evidence (though admittedly, knowing what counts as sufficient evidence is always tricky business). Just because it’s possible that Bigfoot could be running around the Pacific Northwest or elsewhere doesn’t make it plausible, much less probable or proven.

The Possible

Saying that something is possible simply means that it does not violate the basic laws of logic. In the realm of empirical scholarship, one could also add that it does not violate basic physical laws, that something is both logically and physically possible.

The existence of Bigfoot is possible – it violates no logical or physical rules, but given the overwhelming lack of evidence, there’s no reason to regard Bigfoot’s existence as having anything but the lowest degree of probability. The same goes for claims about extraterrestrial influence in building the Egyptian Pyramids or Stonehenge or the Nazca Lines.

The Plausible

To say that something is plausible is to indicate that it has a higher probability than the merely possible - it is believable, it makes sense. But claims that are merely plausible (that is, that are not also probable) lack the evidence to be taken as having a high degree of probability of truth.

Thor Heyerdahl’s famous voyage on his Kon-Tiki raft from South America to Polynesia certainly proved that it was possible for people to have traveled from the one place to the other using fairly simple watercraft. He even made it plausible that Polynesians could have made voyages to South America, but his voyage alone did nothing to make such notions probable, much less proven. (See this news article from this past summer from Live Science on both Heyerdahl and more recent evidence of Polynesian voyaging to South America that I’ll discuss below.)

An article I encountered this morning on Science Daily, “Early Apes Walked Upright 15 Million Years Earlier Than Previously Thought, Evolutionary Biologist Argues,” makes what I’d consider a plausible claim. “An extraordinary advance in human origins research reveals evidence of the emergence of the upright human body plan over 15 million years earlier than most experts have believed. More dramatically, the study confirms preliminary evidence that many early hominoid apes were most likely upright bipedal walkers sharing the basic body form of modern humans.” So long as there’s evidence, it’s plausible that hominoid bipedalism might be much older than previously thought, but this is an extraordinary claim, and as such it requires not simply a single study with good evidence but a body of good evidence in order to be taken by many scholars as probable, much less proven.

The Probable and the Proven

To say that something is probable means that it is very likely to be the case, that it has a high degree of probability. To refer to something as proven implies that a claim is definitely true, though given the ever-present possibilities of faulty observation (even systematic faulty observation) and of partial understanding or misunderstanding of empirical materials, nothing (at least outside the abstract realm of pure logic and mathematics) is ever demonstrated to be completely and irrevocably true. Instead, to say something is proven is really to say that it has such a high degree of probability of truth that we can pragmatically assume it to be true (though ideally while keeping an open mind towards potential counter-evidence).

When Pizarro and his Spanish soldiers reached Peru, they encountered chickens (an Old World domesticated bird) already there. There are at least a couple ways the chickens could have arrived in the New World – they could have been brought by the very earliest European voyages to the Caribbean and Central America in the 1490s and 1500s and very rapidly diffused southward; or they could have been brought by Polynesian voyagers to South America (the only problem there being, at least until now, a lack of evidence of such Polynesian voyages having actually occurred).

When Captain Cook and other explorers encountered a variety of Polynesian islands in the late 18th century, they encountered sweet potatoes, among other crops being grown. As I understand it, there’s no definite evidence of how these South American plants reached Polynesia. They could have been brought by the Spanish to the Philippines early in the Colonial period and diffused from there to Indonesia, Melanesia, and ultimately Polynesia, or they could have been brought back from South America by Polynesians themselves.

New evidence released this past summer addresses this situation. Chicken bones were recovered in Peru that, according to carbon dating, predate Spanish voyages to the Americas by about a century. Further, genetic evidence links the chicken bones to Polynesian varieties of chickens. (See the previously cited article from Live Science and also this article from New Scientist.)

If the carbon dating and DNA evidence hold up (always an important consideration with significant new claims), this proves that Polynesian chickens reached Peru on at least one occasion. Given how highly implausible it is that chickens made the voyage on their own (though it is not logically impossible), it makes highly probable, if not proven, the claim that Polynesians came to South America on at least one occasion. It also makes it highly probable that the chickens seen by Pizarro were of Polynesian stock. I’d even go so far as to say that this new evidence makes probable the idea that Polynesians brought sweet potatoes back from South America directly, though the distinction between plausible and probable is a bit more ambiguous in this case.