
John Fea's Virtual Office Hours

Randall Stephens

Historical Society member, Springsteen disciple, and historian extraordinaire John Fea recently launched a series of videos called "Virtual Office Hours" from his blog The Way of Improvement Leads Home.

So far he's created four clips.  In these Fea discusses some of the concepts that animate a course he teaches on historical methods.  Listen to Fea discuss "The Past is a Foreign Country," "In Search of a Usable Past," "Are Historians Revisionists?" and "How Do Historians Think?"

Fea does a wonderful job of explaining concepts like historicism, Whig history, and the much-misunderstood/maligned revisionism.  Students of history, and their profs, too, would do well to watch these.  And many of us who teach courses on historiography, methods, and the like could take a page out of Fea's playbook here.

Perhaps the next step would be to get students to use technology to engage the concepts and themes introduced in a methods class.  Maybe they could make group-project videos that explore the uses and abuses of the past, the role that history plays in forming policy, how our understanding of the past changes from one generation to the next, and more. 

Blueberries

Dan Allosso

When I read old books, I’m always on the lookout for references to other old books, or to topics that were relevant when the book was written, but that may not be well known now.  These sometimes lead in new and surprising directions.  There were several things in Bolton Hall’s Three Acres and Liberty, the book that launched the back-to-the-land movement in 1907, that seemed to deserve more investigation.  The thing that really jumped out at me, though, was a passing remark he made about blueberries.

Even though the blueberry is a hardy native plant that the Indians had harvested from time immemorial, Hall says “with our present knowledge of the blueberry, it is doubtful if it can be made a commercially cultivated crop.”  This surprised me, since one of my family’s favorite activities when we lived out East was picking blueberries at a big berry farm in the shadow of Mount Monadnock.  But Bolton Hall was no dummy.  Three Acres and Liberty describes a variety of intensive gardening techniques that are popular today (and that many people think were invented by their current proponents), including the use of manure instead of commercial fertilizers; “super close culture,” where plants are set very close together to use the land and water efficiently and keep down weeds; “companion cropping” and “double cropping,” to extend the growing season; rotation to reduce the impact of pests; soil inoculation using nitrogen-fixing legumes (just recently discovered when he wrote); mulching to save water; raising chickens, ducks and rabbits to use waste and produce food and manure; canning and drying to preserve even small quantities of food; and even disposal of city sewage by using human waste on urban gardens.  So I had to believe he was right about blueberries not being commercially viable in 1907.  And of course, the obvious next question was, when did this change?

Apparently, Hall wrote those words just about as late as he could have.  Commercial blueberry production began in Maine in the last quarter of the nineteenth century, but the source of the berries was native plants that propagated themselves and spread as they had always done.  The only change was that growers removed the surrounding trees to give the blueberries more room and light.  However, in 1911 a New Jersey woman named Elizabeth Coleman White (1871-1954) read a USDA Bulletin about experiments in blueberry propagation.  She invited the bulletin’s author, botanist Frederick Colville, to the New Jersey pine barrens, where blueberries grew wild as they did in Maine.  White and Colville got the locals, who picked the wild berries regularly, to tag the bushes where they found the largest fruits.  They asked the local pickers questions about taste, time of ripening, plant vigor, and disease resistance, and brought the best plants back to the family’s farm in Whitesbog.  By 1916, White and Colville had created the “Tru-Blu-Berry,” America’s first commercial blueberry.  In 1927, White helped organize the New Jersey Blueberry Cooperative Association, which still exists today.

I’ve only scratched the surface of this story – there’s a lot that could be done with a topic like this! – but the thing I like most about it is that White and Colville were smart enough to use the expertise and local wisdom of the poor folk, the “pineys,” who went out into the barrens to pick wild fruit.  The Progressive Era is remembered as a time when top-down, expert-driven solutions became all the rage.  Often these scientific innovations were imposed on rural people without consultation, much less consent.  And often these changes were much less valuable and lasting than the experts promised.  So it’s great to find a story where the innovation came from a cooperative process, and led to a tangible and lasting improvement.  I’ll think of this next summer, when I pick the fruit from the eighteen blueberry plants of half a dozen varieties I planted this fall.
____________

Sources:  

Distinguished Women of Past and Present: http://www.distinguishedwomen.com/biographies/white-ec.html

USDA Technical Bulletin #275, 1932: http://organicroots.nal.usda.gov/download/CAT86200269/PDF (This is really cool! Did you know the USDA has an online National Agricultural Library with an “Organic Roots Digital Collection”?  I didn’t until today.  Here it is: http://organicroots.nal.usda.gov.)

Early Color Film

Heather Cox Richardson

In the late 1930s, Charles Cushman began to experiment with color film. For the next three decades, his photos documented the technological and social changes in America in striking images.

The images bring home the human dimension of history. Tractors didn’t just replace horse-drawn carts; farmers driving tractors down dirt roads passed farmers driving their horse-drawn carts the other way. Seeing the men in Cushman’s images makes vivid the human element embedded in historical change. It’s impossible not to imagine the pride of the farmer with the new-fangled tractor as he sports the newest technology past his less-well-off neighbor, and to suspect that the man with the horses feels both left behind and superior to the man who has jumped on the latest fad. The photos draw you—and with luck, students—in, putting human experience of the twentieth century’s momentous changes front and center.

They are well worth a look.

Good Fences

Dan Allosso

Robert Frost famously epitomized New Englanders with the wry phrase, “Good fences make good neighbors.”  But even if your neighbors are far enough away for comfort and you like them, fences have their uses.  I’ve been thinking about these as I continue to work on 19th-century American history while starting up a small farm in the upper Midwest.  It’s interesting, because I suspect I’m living through a moment of historic change, and it’s all about fences.

In addition to influencing the relationships of neighbors, I’m learning fences have a number of other uses on the farm.  Of course, they help keep your animals where you want them.  And hopefully they help keep predators off your animals.  And they may keep wildlife off your vegetables, although hungry deer will jump any fence less than eight feet high.  Less obviously, though, fences define our relationship to the land and the uses we can put it to.

Most everyone is familiar with the story of the colonial split-rail fence.  There’s one on the cover of William Cronon’s Changes in the Land.  The rail fence, roughly cut from the timber settlers needed to clear in order to turn wild eastern forest into farmland, symbolizes European ideas of land use and ownership that settlers brought with them and imposed on the environment and the natives they found there.

This style of fencing was cheap and easy where settlers found trees needing to be cleared.  I took this photo at the Genesee Country Village and Museum in western New York.  This section of the museum represents life around the year 1800, when farming was a family enterprise done with ox, horse, and human power (I spent a 4th of July weekend in that cabin with my family as "The 1800 Farm Family").  An energetic farmer could clear about seven acres of land in a year, and often the family farmstead was split between a small cultivated field, a pasture for grazing animals, and a woodlot for fuel.  As families moved west, however, they discovered plains of prairie grasses that towered over the heads of children like Laura Ingalls.  The wooden fences of the East were impractical in many parts of the Midwest, where lumber came from far away at great expense, and was reserved for building things like houses, barns, churches and saloons.  And without internal combustion and irrigation, much of the land farther west was unfit for cultivation, but ideal for grazing if the animals could just be contained.

Joseph Glidden (1813-1906) was a New Englander who moved to Illinois in 1843.  He patented barbed wire in 1873 and died a millionaire.  Among his holdings were 335,000 acres in Texas: range land that his invention had allowed to be fenced.  The enclosure of the rangelands is one of the mythic moments in the story of the American West.  Through books and movies like The Virginian (1902), Oklahoma (1943), Shane (1953), Heaven’s Gate (1980), and Open Range (2003), it is as central to popular western history as Frederick Jackson Turner’s comments about the closing of the frontier are to the academic West.  Barbed wire fences dramatically expanded our ability to affordably control very large spaces.  Once again, Americans were able to impose our vision on the land (and also, once again, on the Indians).

Fences remain important to farmers, and their use is still a complicated affair.  Cattle and horses can be grazed on pasture enclosed by a few strands of barbed wire.  Sheep, with thick fleeces to protect them, will go through barbed wire.  Goats are even harder to contain – there’s an old saying that if your fence won’t hold water, it won’t hold goats.  And although chickens will usually come back home in the evening, there are a lot of varmints out there that will eat them in the meantime if they aren’t protected by a fence.  Farmers have used woven wire, hardware cloth, rigid panels, and electric wire to contain and protect animals.  Each comes at a price, and it adds up: a decent four-foot high sheep and goat fence will run you over a dollar a foot.  So these fences tended to be expensive and permanent.  Most small farmers use and endlessly reuse a variety of materials based on what they can get cheap, and hoard the bits they aren’t currently using.

As sustainability and soil depletion have come into sharper focus in recent years, innovative farmers have rediscovered what the old-timers knew before the age of chemical fertilizer: pastures will support a larger number of animals if they are grazed in succession.  Sheep and goats prefer to eat different plants than cows, so they can coexist with cattle on a pasture without competing.  And then the poultry can follow, eating bugs out of the droppings, which not only breaks up the fertilizer and spreads it over the fields, but also actually reduces the number of parasites and pathogens.  This is a win-win-win: the animals are better off, the farmer produces a larger quantity and wider variety of protein on a given plot of land, and the land itself is improved in the process.  The only catch is, you have to enclose and protect all these different types of creatures!

That’s where the story gets interesting.  The cost of fencing has traditionally made it difficult for farmers to fence appropriately for intensive pasturing, and the effort involved in setting and moving fences has made land use inflexible.  But recently, battery-powered low-impedance fence chargers and moveable electric fences have changed the game again for small farms.  Deep-cycle batteries like the ones in your boat or RV can run miles of low-cost electric tape, twine or netting.  They can even be hooked to solar chargers.  And they’re easy to set up and move, allowing farmers to raise temporary paddocks and move animals as quickly or slowly as needed over the land.
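As a rough illustration of why these setups are so practical, here is a quick back-of-the-envelope sketch in Python. Every number in it is an assumption made for the sake of the example (battery capacity, usable charge, and charger draw all vary widely by product), so treat it as a feasibility sketch rather than a spec for any particular battery or fence charger.

```python
# Rough feasibility sketch: how long might a deep-cycle battery run a
# low-impedance fence charger?  All values below are illustrative
# assumptions, not specifications for any real battery or charger.

battery_capacity_ah = 100    # assumed deep-cycle battery capacity, in amp-hours
usable_fraction = 0.5        # assume we only draw the battery down to 50%
charger_draw_amps = 0.1      # assumed average current draw of the charger

usable_ah = battery_capacity_ah * usable_fraction
runtime_hours = usable_ah / charger_draw_amps
print(f"About {runtime_hours:.0f} hours between charges, "
      f"or roughly {runtime_hours / 24:.0f} days.")
# With these made-up numbers: about 500 hours, or three weeks, which is
# why even a small solar panel can keep a paddock fence hot indefinitely.
```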

This may not seem like such a big deal, but I think it may turn out to be.  The world’s food supply depends heavily on fossil fuels, both for transportation and for the production of synthetic fertilizers like anhydrous ammonia.  It currently takes fifteen calories of energy to put a calorie of food on your table.  If there’s any truth to either climate change or peak oil, multi-thousand acre cornfields and factory-style feedlots may turn out to be as much of a twentieth-century anomaly as McMansions and jet-setting to conferences.  But it has been suggested that the world’s food needs could be met by intensive techniques combining grazing with gardening.  Farmers like Joel Salatin claim that not only would intensive pasturing solve the world food problem, but “in fewer than ten years we would sequester all the atmospheric carbon generated since the beginning of the industrial age” (Folks, This Ain’t Normal, p. 195).  If true, this is a really big deal; and even if Salatin is not quite right about this, intensive pasturing still seems like a really good idea.  And these new fences make it possible.  That could be historic.

The Bouncing Boundaries of Europe

Heather Cox Richardson

I have thought a great deal this winter about nationalism. It seems to me that the rise of the internet, international trade, and NGOs prompts us to ask whether nationalism was a twentieth-century phenomenon that had little meaning before the mid-nineteenth century, and will have little meaning after the mid-twenty-first century. The bouncing boundaries in this video seem to reinforce that suggestion:

Pardoning Alan Turing

Heather Cox Richardson

Last week the British House of Lords declined to pardon Alan Turing for the crime of being gay. Convicted of indecency in 1952, Turing chose chemical castration rather than a prison term. Two years later, he killed himself by ingesting cyanide. Perhaps not ironically—since such symbolism was almost certainly intentional coming from so brilliant an individual—he administered the poison to himself in an apple.

Alan Turing is widely considered to be the father of the modern computer. He was a key figure in Britain’s World War II code breaking center at Bletchley Park, inventing a machine that could break ciphers, including the difficult German Enigma codes. After the war, he continued to work in the world of artificial intelligence. Engineers still use the Turing Test to judge a machine’s ability to show intelligent behavior.

In 2009, Prime Minister Gordon Brown made a formal apology to Dr. Turing. Noting that the brilliant scientist had truly helped to turn the tide of war, Brown called it “horrifying” that he was treated “so inhumanely.” “While Turing was dealt with under the law of the time, and we can't put the clock back, his treatment was of course utterly unfair, and I am pleased to have the chance to say how deeply sorry I and we all are for what happened to him,” Brown said. “So on behalf of the British government, and all those who live freely thanks to Alan's work, I am very proud to say: we're sorry. You deserved so much better.”

The formal apology was followed by an on-line petition asking British government officials to pardon Turing. By February 2012, 23,000 people had signed it. Last week, the Justice Minister declined to do as they asked. “A posthumous pardon was not considered appropriate as Alan Turing was properly convicted of what at the time was a criminal offence,” he explained.

Never shy about his defense of gay rights, columnist Dan Savage compared the conviction of Turing to the conviction of a Swiss man who also broke a law we now find appalling. In 1942, Jakob Spirig helped Jewish refugees from Germany cross into Switzerland, and was sent to prison for his crime. In January 2004, the Swiss government pardoned Spirig, and all other people convicted for helping refugees escaping Nazi Germany. Savage asked the House of Lords: “Did the Swiss government err when it pardoned Jakob Spirig? Or did you err by not pardoning Alan Turing?”

Much though I hate to disagree with Dan Savage, who could rest on his laurels for the It Gets Better Project alone, I’m not a fan of pardoning people who have committed the crime of being human under inhumane laws. This describes Turing. He doesn’t need a pardon; the society that made him a criminal does. As the Justice Minister went on to explain: “It is tragic that Alan Turing was convicted of an offence which now seems both cruel and absurd, particularly... given his outstanding contribution to the war effort…. However, the law at the time required a prosecution and, as such, long-standing policy has been to accept that such convictions took place and, rather than trying to alter the historical context and to put right what cannot be put right, ensure instead that we never again return to those times.”

An apology is appropriate; a pardon is not.

Some things can never be put right. Pardoning a dead victim for the crime of being hated is a gift to the present, not the past. It lets modern-day people off the hook. They can be comfortable in their own righteousness, concluding that today’s injustices have nothing to do with such right-thinking people as they are. But they do. Laws reflect a society, and the ones that turned Turing and Spirig into criminals implicated not just their homophobic or pro-Nazi fellow citizens, but all of the members of their society who accepted those laws. A pardon in a case like Turing’s is a Get Out of Jail Free card not for him, but for us.

It’s way too late to pardon Alan Turing. And it’s way too early to pardon ourselves.

How Streaming Media Services Affect our Perception of “Owning” Music and Movies

Philip White

Despite the company’s recent price increases, the decision to split its DVD delivery and streaming businesses and the lamentable choice to name the former “Qwikster” (as one friend commented, “It sounds like fast-drying spackling!”), I am an avid Netflix fan. And if the company can increase its still-inadequate library of on-demand content, this miser may eventually ditch my old, 500-pound behemoth of a TV and invest in one with Netflix streaming built in, or maybe just a Roku box. Right now, I occasionally watch a movie on my HTC Flyer tablet, which is a better viewing experience than an iPhone/iPod but still a little rinky dink for my liking.

So why does the ability to get movies without waiting for a DVD to arrive or, heaven forbid, leaving the house to patronize the nearest Redbox, appeal? Because it’s quick, convenient, offers a (soon to be) wide choice and there’s a predictable, all-you-can-watch fee instead of an individual charge per disc. And if I sometimes think that Amazon’s Instant Video has a better selection, maybe I’ll forsake Netflix.

So that’s the good, but what about the bad or potentially bad? How is the rise of streaming film and TV content affecting studios large and small, and the actors, producers, directors, crew members and others they employ? Were some of the same questions asked when other new technologies were rolled out? The television? The videotape machine?

Certainly, DVD and Blu-Ray sales are down. And movie prices continue to rise, much to my horror. $12 for a ticket? In the middle of Kansas? Really? I also loathe the gimmicky “cinema suites” that offer a crappy buffet and cheap beer if you’re willing to fork over $20 or more per ticket, and possibly the shirt off your back, too. But how much of these price hikes and the luxury concept that seems to be borrowed from major league sports’ premium on suites and boxes is attributable to movie studios, and how much to the theater companies themselves? I admit that I don’t know.

What I do know is that the ability to stream movies and music on demand, on mobile devices as well as at home, is profoundly affecting how we think about owning this content. The point of buying a DVD (I still haven’t succumbed to the allure of Blu-Ray, though after being blown away by watching Saving Private Ryan in this format on my friend’s big screen it has been tempting) used to be that you could watch one of your faves whenever you like. Well, with streaming you can do that, while removing the embuggerance of actually having to get up from the couch, or, in my case, trusty leather recliner.

When I fully embrace streaming, there will be no disc to scratch, misplace or lose to the clutches of the kids. My wife and I will no longer yell at each other for me absent-mindedly putting The Fellowship of the Ring (mine) in the case meant for The Devil Wears Prada (hers). And if a film is bad, there will be no wait while I send the accursed item back to the Netflix warehouse, and then wait for them to send out the next DVD in my queue. Heck, even without switching to video on demand and just using Netflix’s plain ol’ two DVDs at a time plan, I buy less than a quarter of the DVDs I purchased even two years ago. And those I do get or request for a birthday or Christmas present are true favorites, rather than the mediocre films I kinda liked but only watched once a year that I used to purchase or hint at before Netflix. So, for me at least, I’ve almost completely abandoned DVD ownership, without even jumping headlong onto the bandwagon. And I’m not alone. According to Time, DVD sales were down by 18.3% in the first six months of 2012, while spending on kiosk and on-demand services was up by 40 to 45%. Movie studios and distributors are doing their best to reverse this trend by ensuring that physical copies are available for rent and purchase a lot sooner than they are via Redbox, Netflix or Amazon.

The mass adoption of movie streaming is the biggest historical change in the film industry since the introduction of the VHS-playing VCR to the U.S. in 1977. Once this format had smacked down Sony and its upstart Betamax (what is it with Sony and proprietary formats? Anyone remember MiniDisc?), VHS gave people the ability to watch high quality (for that time) productions in their own home, as well as the ability to record live TV. With Netflix and its kind, the focus has shifted again, as it’s now no longer necessary to have a home-based content device, as a tablet or even a smart phone will suffice. We’ve gone from a cinema-based model to home-focused to mobility-focused, which is apt, given the ever-greater ease of international travel and the greater geographical transience within America today.

And what of music? I am not a Pandora user, nor have I logged into Spotify since signing up. I’ve come to prefer Amazon’s MP3 store to iTunes because of lower prices and my love of using the Amazon Cloud Player on my tablet, and usually buy digital instead of CDs, but I still buy as much music as I did a few years ago. I’m not sure that’s typical, though. I know a lot of people, particularly in the UK where Spotify is more established, who listen to music almost exclusively through streaming offerings, whether it be radio or a paid subscription service.

In the U.S., how long will it be before the majority of major labels make 25 percent of their revenue from streaming music subscriptions to Spotify et al.? Is this already diminishing CD and MP3 sales? The concept is similar to the instant video vs. Blu-Ray/DVD debate–a large and soon-to-be unlimited selection at your fingertips against the tangibility and permanent ownership of a physical product. I still find it easier and safer to plonk in a CD while in the car rather than messing around with an iPod, but with voice-controlled media systems becoming more prevalent (the new iPhone, for instance) and the continued success of satellite radio, that may change soon. I was late to the Netflix party, so maybe eventually I will succumb to the charms of music on demand, and stop buying music. I’m determined that vinyl will be the exception–still loving my Technics SL-1700 turntable and the sleeve design and notes on records old and new!

Thoughts? Do you primarily stream music and movies, or still buy individual copies (be they digital or physical)? Are the media changes afoot much greater than those of the 20th century?

Impressions

Chris Beneke

My understanding of art history is tenuous. At best. But one thing I’ve learned from the popular science writer Jonah Lehrer is that a revolution in 19th-century painting coincided with the advent of a disruptive new technology.* That technology was the camera, and the artistic innovation that it encouraged was Impressionism. With the emergence of the camera, Lehrer writes, “painting lost its monopoly on representation.” Once the static could be captured by a mechanical device, the painter’s comparative advantage resided in his or her ability to convey the fleeting, sensory-laden character of everyday experience. Representation gave way to impression, symbol, and expression.

There may be a lesson here for academia, and historians in particular. Educationally related technological breakthroughs of recent decades—yellow lined paper, VHS players, Laserdiscs, PowerPoint, the insulated thermos mug—could be harnessed by the lecturing professor in the traditional classroom. DVDs and YouTube allowed the professor to illustrate her points with a vivid film clip, or to catch a rejuvenating 45-minute nap. However, the larger cyber universe won’t be so easily tamed. The internet, as we have been told, is a genuinely disruptive technology. There will be no napping.

None of this is news. Dan Allosso has been writing about the radical and generally positive impact online learning is likely to have. I wrote something myself a couple of years ago. And nearly every day, someone pronounces the end of the university as we know it. Usually, that person is Kevin Carey, but not always. Online learning clearly presents a challenge to the way things have been done. (If you doubt it, ask yourself whether you are capable of giving a better lecture on a particular topic than anyone in the world—or check out Jonathan Rees’ blog.) Its concurrence with an increasingly untenable college cost structure should be worrisome to all of us.

Setting aside the daunting tuition and student debt issues, the parallel rise of the camera and Impressionist painting offers us an example of how a disruptive technological change can result in the sort of transformation that Allosso, Carey, Rees, and others have been talking about. Like the Impressionists, we need to capitalize on the ephemerality and distinctiveness of each classroom situation, every day. We also need to presume that the seats bolted to the floors in our lecture halls and classrooms will not be occupied because a professor happens to be standing in front of them delivering the same lecture—one now easily recorded and distributed—he has been giving for the past 15 years. Because of the web’s capacity for delivering knowledge to us in the comfort of our homes or our carefully guarded Starbucks tables, the live lecture’s marginal utility as a means of conveying static truths to a passive audience has diminished, maybe forever.

History teachers need not wholly despair. For years, pedagogical experts (don’t smirk, there is some truth to the designation) have been telling us that students need to be actively engaged in order to learn better anyway. Until now, many of us have been able to evade the implications of that insight because our anecdote-riddled sixty-minute accounts of past events have been so, well, engaging. But like the 19th-century artists who found that their value as purveyors of verisimilitude had faded, we too need to develop creative ways to use history to expand our audience’s understanding of the world. That’s a cliché I know—like telling a baseball team that it needs to win one game at a time. And this process will prove challenging for people like me who have always seen ourselves as doing our job best when we represent the past most faithfully. But it may already be past time for us to think seriously about painting water lilies.

________

* It’s conceivable that my art history problem is related to the fact that I derive my conclusions about the subject from popular science writing, but I digress.

What the Amazon Tablet Means for Bookselling

Philip White

Yep, I’m well aware that this is not a technology blog. But for the next few paragraphs, please indulge me by reading how a forthcoming, yet-to-be-confirmed gadget (and, while we’re on the subject, the Borders meltdown) will impact the bookselling business.

So here’s the rub. In November, Amazon will almost certainly introduce its own tablet. Now, on the surface, this may provoke a yawn and something like, "Really, another iPad clone?" And you would be justified in such a response--numerous non-Apple manufacturers have tried to emulate the success of the iPad. Most (even my own beloved HTC Flyer, which, unlike the rest, allows you to write on the screen, synch the notes to Evernote and perform a full-text search to find them--a true digital notepad) have sold a moderate number of units, but nothing to trouble the Cupertino-based behemoth. Others, like the ill-fated Motorola Xoom and, even worse, the discontinued HP Touchpad have tanked.

But here is where Amazon is different. Unlike its aforementioned peers, the company isn’t trying to create an all-things-to-all-people device, and doesn’t have some kind of inferiority complex about the iPad that leads to throwaway features such as a rear-facing camera--which will own the Worst Tablet Thingy Nobody Uses title until tablet makers get the hint and stop including it.

By contrast, Amazon CEO Jeff Bezos and his crew know their customers--not least because they have millions of data points from all the amazon.com transactions--and realize what they want. With the Kindle, this was the largest selection of ebooks, long battery life, no screen glare, and the convenient, portable size of a paperback book. With the forthcoming Amazon Kindle Tablet or whatever they call it, the first three of these will hold true. The fourth is trickier because it will be a glossy touchscreen instead of an e-ink display. However, from my own tablet experience, if you set the background color to sepia and reduce the contrast to minimum, the eye strain issue typical of reading on a shiny screen goes away.

Amazon also knows that its customers want music and movies on the go, and don’t want to worry about storing them on a tiny device. So what has it done in advance? Boosted the selection of its instant video streaming service, and launched its own cloud-based music offering. Amazon’s MP3 store is a winner--consistently offering albums cheaper than iTunes with an equal selection. There are rumors that those who buy the Kindle Tablet will get the video subscription fee waived for a year. Double win. And it’s not a coincidence that on the "Shop All Departments" menu on the Amazon homepage, its video, MP3 and e-book "departments" are #s 1, 2, and 3 on the list. The next reason for buying the Kindle Tablet will be the price: the estimated $250 cost is half that of the cheapest iPad. Due to its convenient form factor, focus on e-books, movies and music (instead of apps, a market that Apple dominates) and brand loyalty, I predict that the device will own the #2 spot in the tablet sales charts.

Anyway, I digress--back to books. The ability to reach tens of millions of customers via one device (Kindle) put Amazon in a strong position in its continuing pricing negotiations with publishers. With two such devices (not counting the Kindle DX flop here), this will be solidified. Amazon is also riding the wave of the Borders implosion. As publishers no longer have to fight over table space at the big B--though Barnes & Noble is still a factor--they will focus their attention on getting books onto Amazon’s "best of" lists, on offering early discounts for forthcoming bestsellers, and on finally getting e-book pricing right. Now, the debate over what an ebook is worth will continue, because Amazon knows its users are ticked off when ebook prices go over 10 bucks (some Kindle user groups even flag such books to dissuade others from buying), and the publishers need to maintain the perceived value of books and boost per-copy margins. But with the power of its ever-wider reach, Amazon will be, for better or worse, in a stronger position to tip that argument towards its side of the see-saw. It will also be fascinating to see how the success of the Kindle Tablet and decline of Borders affect hardback and softback prices on amazon.com. If only we’d see the start of e-book and hardback bundles.

Another factor that bolsters Amazon’s position as the primary bookseller is its publishing arm. Initially focused on obscure books and relatively unknown authors, it is now pinching big-name writers such as Timothy Ferriss, author of The Four-Hour Work Week. And the interesting thing? Instead of being wooed, Ferriss and his fellow defectors come to Amazon! As the company expands its roster and its catalog, it is developing a seamless, end-to-end model--signing talent, publishing their books, and selling these titles through its own distribution point. This is a powerful new variation of Amazon’s self-publishing empire, which already has an almost unassailable position in that ever-growing niche.

How traditional publishers will compete with these factors remains to be seen. From a writer’s perspective, the benefits of going with a publisher instead of Amazon’s publishing arm are undeniable--skilled editors with years of experience, publicity managers with large contact books, and relationships with newspapers, magazines, radio stations and bloggers. Same goes for the "why to sell rights to a publisher instead of self-publishing" list, to which you can also add guaranteed money up front. Also, it’s unclear (to this writer, at least) whether Amazon will publish physical copies or just ebooks. If it’s just the latter, are they missing out on potential revenue, or merely reading the tea leaves and seeing ever-increasing ebook sales, diminishing hard copy revenue and more bookstores closing? Regardless, Amazon is the dominant force in bookselling, and with its latest gadget soon to be unveiled and its publishing wing taking off, that won’t be changing any time soon.

Computing Machines

Randall Stephens

How did we get here?

I'm typing this on my MacBook Pro, a laptop that is a gazillion times more powerful and "pro" than the towering, whirring, always-freezing-up computers I used back in grad school. In fact, my iPad is much faster on many applications than the Dell laptop I carted around five years ago. (Check out Dan's great post from February on a related topic.)

For a little wisdom on the early days of computing and the accelerated pace of change, have a look at this clip of a BBC documentary from the early 1990s.


.-- .... .- - / .... .- - .... / --. --- -.. The Telegraph and the Information Revolution

Heather Cox Richardson

On May 24, 1844, Samuel Morse sent his famous telegraph message, “What hath God wrought?” from the U.S. Capitol to his assistant in Baltimore, Maryland. (See the dots and dashes in the title.) Morse had begun his career as a painter, but, the story goes, keenly felt the problems of communication over distance when his wife took ill and died while he was away from home. By the time he got the hand-delivered note warning him that she was sick, she was already buried.

While it had been a personal crisis that inspired Morse to pursue the telegraph, the importance of the new machine reached far beyond families. The telegraph caused a revolution in the spread of information in America. The information revolution, in turn, changed politics.

Notably, historians have credited the telegraph with hastening the coming of the Civil War. Before the time of fast communication, politicians could cater to different voters by making contradictory promises. Antebellum Democrats and Whigs could endorse slavery in the South and attack it in the North. Since news rarely traveled far, their apostasy seldom came back to haunt them.

The telegraph changed all that. It offered voters a new, clear window on politics. Now reporters could follow politicians and send messages to editors back home.

But faster communication did not necessarily mean accuracy. On the contrary, partisan editors tried to position their journalists in critical spots so they could control the spin about what was going on. They happily spun stories that would discomfit politicians they opposed. As the sectional crisis heated up, the telegraph enabled partisan editors to portray far away events in ways that bolstered their own prejudices.

On-the-spot reporting took away politicians’ ability to ignore the gulf between North and South. It forced white Southerners to defend slavery, and made Northerners sensitive to the growing Southern power over the government. The political parties could not remain competitive nationally, partisanship rose, and the country split. The result was bloody.

Information might come faster with the telegraph, but it was not necessarily more accurate. The same could be said about radio and television, which provided more information than ever, but still used a strong editorial filter.

Now a new tool has the potential to deliver the accuracy the telegraph promised. The internet provides even faster and more thorough information, with far less editorial filtering than ever before. This has given us instant fact-checking, in which politicians who vehemently deny saying something often find one of their statements with those very words posted to YouTube. It also gives us immediate commentary by specialists on a subject under discussion, judging the value of a proposed policy.

The internet has also given us a sea of bloggers who follow local developments and produce verifiable information that would never make it onto an editorial desk but that might, in fact, turn out to be part of a larger pattern. Joshua Marshall at www.talkingpointsmemo.com put such information together during President G. W. Bush’s second term to uncover the U.S. Attorney removals, a story the mainstream press initially missed altogether.

The web has the potential to break down editorial partisanship, but this accuracy has an obvious stumbling block. Will readers be willing to investigate politics beyond their initial biases to entertain a range of ideas and reach clear-eyed decisions about policies? Sadly, studies so far indicate the opposite, that people use the internet to segregate themselves along partisan lines and reinforce their prejudices rather than to tear them down.

The telegraph initially promised to break the close relationship of politics and the press by giving people access to events unfiltered by partisan editors. It failed. The telegraph only increased the partisanship of the news Americans read. Now the internet has the potential to break the ties between the press and politics for real. But can it, in the face of entrenched political partisanship?

One hundred and sixty-seven years after the telegraph tapped out its famous words, we’re still struggling with the same questions.

A Blast from Our Tech Past

Randall Stephens

Is it true that one-third of the world's population will have watched the royal wedding? Wow. . . (And you thought you could get a break from hearing about THE event of 2011. Nope.)

In the scope of modern history, live broadcasts and recording technology are such recent developments.

This video from YouTube is an example of primitive video tech. President Eisenhower's 1958 address deals with communication and the challenges of the future. Ironic that he seems to be having such trouble getting just the right words out! (It might count as one of the worst presidential speeches of modern history.) The script takes a decidedly World's Fair tone--the progress rhetoric that will inform amusement parks like Epcot. The World of Tomorrow!!

According to a little history of TV site that the FCC has put together:

In 1956 the Ampex quadruplex videotape replaced the kinescope; making it possible for television programs to be produced anywhere, as well as greatly improving the visual quality on home sets. This physical technology led to a change in organizational technology by allowing high-quality television production to happen away from the New York studios. Ultimately, this led much of the television industry to move to the artistic and technical center of Hollywood with news and business operations remaining on the East Coast.

In 1957 the 1st practical remote control, invented by Robert Adler and called the "Space Commander," was introduced by Zenith. This wireless, ultrasound remote followed and improved upon wired remotes and systems that didn't work well if sunlight shone between the remote and the television.

This "Golden Age" of television also saw the establishment of several significant technological standards. These included the National Television Standards Committee (NTSC) standards for black and white (1941) and color television (1953). In 1952 the FCC made a key decision, via what is known as the Sixth Report and Order, to permit UHF broadcasting for the 1st time on 70 new channels (14 to 83). This was an essential decision because the Nation was already running out of channels on VHF (channels 2-13). That decision gave 95% of the U.S. television markets three VHF channels each, establishing a pattern that generally continues today.

Thus the "Golden Age" was a period of intense growth and expansion, introducing many of the television accessories and methods of distribution that we take for granted today. But the revolution – technological and cultural – that television was to introduce to America and the world was just beginning.

To see how this techsplosion would later impact modern families, watch the charming little BBC series Electric Dreams (2009), now airing on PBS. Gotta get back in time, minus the DeLorean: "The Sullivan-Barnes family from Reading are a thoroughly modern family who own the latest in 21st century gadgetry. In a unique experiment they were stripped of all their modern tech and their own home was taken back in time so that they could live with the technology of earlier decades. The family lived a year per day starting in 1970 right up to the year 2000."

Archives and History: Notes from the New England Archivists Conference

Dana Goblaskas

Archives and history have always been fields that are closely intertwined; without archives, historians would suffer a loss of many valuable primary sources, and without a sense of history, archivists would have no context in which to place their collections. Without history, frankly, archivists would be out of a job.

Or would they?

The idea of a divide between the disciplines of archives and history may seem unimaginable to many involved in those fields, but according to James O’Toole, the Charles I. Clough Millennium Chair in History at Boston College, a split may already be forming.

This “archival divide” was mentioned a few times during the recent spring conference of the New England Archivists, held at Brown University, April 1-2. One of Saturday morning’s first sessions, titled “Is Archival Education Preparing Tomorrow’s Archivists?” featured a lively discussion (as lively as a room full of archivists can be at 9 am, anyway) about how the field is changing and how education is changing in response. To sum up: as records shift from paper to electronic formats, archival education is beginning to stress competence in digital preservation, database management, and knowledge of web architecture and social media. Some members of the profession are concerned that a rift is growing between students interested in the digital realm of archives and those more attracted to the “analog” side of things—the manuscripts, photographs, and other ephemera that spring to mind when one thinks about an archive.

After an hour’s worth of conversation about how to bridge that gap between digital and analog spheres, O’Toole—formerly a professor of archival studies at UMass Boston—broached the question: Where does history fit into all of this? Based on what had been covered in the session so far, it seemed like the whole idea of history was taking a backseat to the new technical aspects of the profession. O’Toole expressed concern that archival educators may be growing so obsessed with teaching new technologies that they’re no longer placing emphasis on understanding historical context.

Silence filled the room as veteran and fledgling archivists alike reflected on what this observation could mean for the future of the profession. Maybe it was just me, but there seemed to be a very faint sense of panic in the air, especially when another session attendee wondered aloud what would happen in twenty years when many “classically trained” archivists retire, leaving the young technical turks in charge.

Before any lurking sense of doom could take over, a voice from the back of the room spoke up, identifying herself as a student enrolled in the library science and history dual-degree Master’s program at Simmons College. With several fellow students beside her nodding in agreement, she explained that there’s no need to panic quite yet; there are still some archivists-in-training who feel that history is hugely important, not only in order to have an understanding of context, but also to know how to conduct historical research, which has the added bonus of helping archivists better understand and assist researchers.

Though this enthusiastic Simmons student helped to quell the panic a little bit, O’Toole’s point is still a distressing one. Later that day, at the conference’s closing plenary session, he discussed how some archival neighbors, such as historians, aren’t “in the neighborhood” anymore, and how the profession needs to hold on to its roots as it explores new and exciting technologies. I certainly hope the educators in the crowd—and the mentors, students, and others working in the field—heard his message. The digital vs. analog divide may be more popular in archival discussions these days, but the sneakier split growing between archives and history may be the one that proves deadly to the profession if left unchecked.

Index cards are so 1985

Jonathan Rees

Today's guest post comes from Jonathan Rees, professor of history at Colorado State University - Pueblo. He's the author of Representation and Rebellion: The Rockefeller Plan at the Colorado Fuel and Iron Company, 1914-1942 (University Press of Colorado, 2010). He also blogs about historical matters at More or Less Bunk.

I’ve never taken a poll on the subject, but I strongly suspect that many of my fellow historians first encountered a college library the same way I did: as a member of their high school debating team. If by chance you weren’t a debate geek like me, let me briefly explain the way the system worked (and still does). The National Forensics League, the big national high school debate group, would give all students in the country a big, broad topic. The one I remember most fondly from my years in high school was court reform. You would research a more specific reform to propose when you were taking the affirmative. My partner Ahmed and I proposed a federal reporter shield law that year. You had to have a case for reform with plenty of specific factual evidence ready before the summer was even over if you were going to compete successfully on a national level. That’s why you had to go to a college library, to find lots of relevant information fast.

The harder side to argue was always the negative. While you could prepare your affirmative case in advance, you never knew what reform the other team would propose until their first speaker started talking. That’s when you had to make a mad dash to your file box of index cards to prepare a crash course on just about anything so that you could convince the judge to shoot their case down. Speed was of the essence. If you couldn’t gather your evidence before it was your turn to speak, you might very well stand up there with no experts to cite, and who’d believe you then?

When I went to graduate school, I took my debate-ready research habits with me. My dissertation was like a big affirmative case with loads of index cards covering every aspect of my subject and huge piles of copies replacing the debate briefs that some firms sold in order to make arguing anything easier. Lucky for me, there was no time limit. I’m not talking about the overall project (which I got done in what was a very reasonable time for a history PhD, if I say so myself). I’m talking about finding individual quotations from sources that I’d copied or transferred to index cards. I can’t tell you how many hours I spent digging through cards and papers looking for something I knew I had read, but couldn’t exactly remember where.

When I started my second book in 1999, the one after my dissertation, I decided to rectify this problem. I bought an early computerized notes program. After writing a different book in the interim, I just finished almost all of the manuscript from that earlier project on a major writing tear over this last summer. As a result of my delay, it took me ten years to realize how great computerized notes programs really are. It was hard enough back in graduate school to find things that I’d read only a month or a year before. Try finding things that you wrote down over a decade ago! Even the program I bought way back in 1999 allowed me to search my notes by individual words. This not only saved me time, it made it possible for me to quickly regain intellectual control over a huge amount of information.

Recently, I asked two separate historians whose work I greatly admire what notes program they used. In each instance, they looked at me like I was speaking Greek. I tried to explain to them the advantages that I’ve described here, but they were both of the “If it ain’t broke, don’t fix it” school of research. Certainly using pen and paper for notes won’t prevent them from doing more great work in the future (albeit slower than it would otherwise have to be), but I figure my students might as well keep up with the times. I’m teaching both the undergraduate and graduate history research classes this semester, so I’ve required them to use the newest generation in notes programs: Zotero.

Unlike the hundred dollars I plopped down in 1999, Zotero is free. It was created by the Center for History and New Media at George Mason University and it’s really quite an incredible program. It not only allows you to search through your notes by word or by category the same way my decade-old notes program did, it also allows you to pull in PDFs or screenshots from the web and search through those too. Suppose you find a full-view book on Google Books that you like (a common occurrence for those of us who work on American history before 1923). Zotero will record the entire lengthy, complicated URL automatically so that you can get back to it easily. Furthermore, you don’t need a web connection to use Zotero, so you can enter information manually and search through that the same way that you find material in these easy-to-record web items.
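To see why searchable notes are such a leap over a file box of index cards, here is a tiny sketch of the underlying idea: full-text search across a folder of plain-text notes. This is only an illustration of the concept, written in Python; it is not Zotero's actual interface or storage format, and the folder name and search term are invented for the example.

```python
# A minimal illustration of the idea behind searchable research notes:
# scan a folder of plain-text note files and print every line that
# mentions a keyword.  This is NOT how Zotero stores or searches data;
# the folder name and keyword below are made up for the example.

from pathlib import Path

def search_notes(notes_dir: str, keyword: str) -> None:
    """Print each matching line as filename:line_number: text."""
    for note in sorted(Path(notes_dir).glob("*.txt")):
        lines = note.read_text(encoding="utf-8").splitlines()
        for line_no, line in enumerate(lines, start=1):
            if keyword.lower() in line.lower():
                print(f"{note.name}:{line_no}: {line.strip()}")

if __name__ == "__main__":
    # e.g., find every note that mentions a half-remembered source
    search_notes("research_notes", "Rockefeller Plan")
```

Even something this crude beats digging through cards and photocopies for a quotation you half remember; programs like Zotero layer categories, saved searches, and attachments on top of the same basic idea.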

Until very recently, I would have said the one pitfall of using Zotero was that it only worked through the Firefox browser, a browser that has looked less and less useful to me the more I experiment with other choices. It turns out they just took care of that problem. Indeed, you can now get Zotero as a standalone program so that you don’t even need to use a browser at all.

With the introduction of Google Books, newspaper databases like Chronicling America from the Library of Congress, and comprehensive journal databases like JSTOR, history research has changed forever. You don’t need to be near a great library to have access to scads of excellent primary sources. The main problem that students and historians alike now face, if they want to write about the last two or three hundred years, is not too little information, but too much. In the future, the quantity of sources will tell us little about research; it’s the ability to find the right information for any given point that will matter the most. You can still write history using methods that stood you in good stead back in 1985, but if there’s a new way to manage gobs of information faster, why wouldn’t you want to try it?