Monday, March 23, 2009

IP Crimes and Vices by Jeffrey A. Tucker


The 18 or so articles I've written about "intellectual property" – elaborating on a book I consider to be a seminal work of our epoch, Against Intellectual Monopoly – generated floods of email like I've never seen on any topic. The thing that gets people going is the conclusion: in a free market, there should be no legal grants of patent or copyright.

What many people do – and this is rather depressing from the point of view of a writer – is seize on the conclusion, ignore the reasoning and arguments, and then attempt an instantaneous, armchair refutation.

It always goes something like this: "Oh, you are telling me that I could just steal this article that you wrote, even put my name on it, sell it and take the money, and there would be nothing wrong with doing that?"

Some go even further to actually do this: put their names on it, post it somewhere, and send me the link.

I think precisely what you are thinking: "What a jerk!"

I'm not sure what other kind of response they expect from me. They must really think I will say: "Oh, this is so shocking! I had not considered that someone might actually do this to me if we got rid of the U.S. Copyright Office! My goodness, this kind of thing cannot be tolerated. I was completely wrong in everything I said. I too am grateful to the state for all it does to protect my intellectual creations and my good name."

Sorry to say, this is not my response. My detailed response actually goes as follows: "If you do that in a free society, you will not be arrested by the police or experience physical coercion blessed by official mandate. However, everyone is free to regard you as a poseur, a wretch, a menace to society, and wholly lacking in credibility. If having a good reputation counts to you, it's probably not a good idea to pretend to have written something that you have not in fact written."

The difference here comes down to a wonderful distinction that was made by Lysander Spooner in the 19th century. He was careful to explain the difference between a vice and a crime. A crime involves aggressive force or threat of aggressive force against another person or privately owned property. A vice, however, is a much larger category of behaviors that don't involve invasion of person or property.

Vices can involve lying, being nasty to others, eating like a pig in public, abusing oneself with drugs or liquor, failing to shower and thereby stinking to high heaven, swearing in public, betraying benefactors, rumor mongering, displaying ingratitude, not keeping commitments, being a shopaholic, being a greedy miser, failing to do what you say you are going to do, making up stories about other people, taking credit for things you didn't do, failing to give credit where it is due, and other things along these lines.

In a free society, vice is controlled through decentralized social enforcement of social, ethical, and religious norms. The great problem of statism is that it turns vices into crimes, and then when the law is repealed, people forget that there are, after all, certain social norms that nonetheless need to be upheld and will be upheld once society is managing itself rather than being managed by the state.

Consider the case of classroom plagiarism, for example. A teacher wrote to me with a concern that the repeal of intellectual property law would make it more difficult to punish students for turning in work that claimed to be original but was actually copied from elsewhere. I pointed out that the police and courts are not involved in the enforcement of classroom rules now, so why would a change in federal legislation be any different? Plagiarism is still plagiarism.

IP law has really had the effect of distorting our society's sense of all of these matters. It has made everyone too unwilling to admit our dependence on imitation and emulation as institutions that permit and encourage progress. It has made people too shy to copy the success of others and admit to doing so. Writers, artists, and entrepreneurs all live with this weird burden of expectation that everything they do must be completely original and that they must never draw from other sources. It's preposterous!

On the other hand, we are too quick to credit the state for preventing the mass outbreak of old-fashioned vice. Even without copyright and patent, some kinds of behaviors and practices will remain shoddy, unseemly, ungracious, conniving, and socially unacceptable. What, for example, would you say about a local author who claimed to have written a new play that turned out to be written by Shakespeare? Doing this is perfectly legal right now. But the person would be regarded as a lout and a fool for the rest of his life.

Hence, the repeal of "intellectual property" law does not mean some sort of crazed free-for-all chaos in which no one can be entirely sure of anyone's identity, creations, who wrote what, what company did what, where credit is due, what one's commitments are, and the like. What we will gain is a great sense of our moral obligations to each other.

And in the absence of the state's grant of monopoly privilege, we will become ever more vigilant in giving credit where it is due. You still have to be a nice person who acts with a sense of fairness, equanimity, and justice, as conventionally understood. If you don't, the state will not crack your skull, but you will lose something profoundly important.

In other words, in the absence of IP, we gain a greater sense of the distinction between what is vice and what is crime, and a better means for dealing with both.

March 19, 2009

Jeffrey Tucker [send him mail] is editorial vice president of www.Mises.org.

Copyright © 2009 by LewRockwell.com. Permission to reprint in whole or in part is gladly granted, provided full credit is given.

Jeffrey Tucker Archives


Wednesday, March 4, 2009

Copyright Holders Challenge Sites That Scrape Content - NYTimes.com


Copyright Challenge for Sites That Excerpt

By BRIAN STELTER

When the popular New York business blog Silicon Alley Insider quoted a quarter of Peggy Noonan’s Wall Street Journal column in mid-February, the editor added a caveat at the end: “We thank Dow Jones in advance for allowing us to bring it to you.”

The editor added “in advance” because Dow Jones, the publisher of The Journal, had not given the blog permission to use the column. The excerpt was published with the assumption that it would be permitted under the “fair use” statute of copyright law.

Generally, the excerpts have been considered legal, and for years they have been welcomed by major media companies, which were happy to receive links and pass-along traffic from the swarm of Web sites that regurgitate their news and information.

But some media executives are growing concerned that the increasingly popular curators of the Web that are taking large pieces of the original work — a practice sometimes called scraping — are shaving away potential readers and profiting from the content.

With the Web’s advertising engine stalling just as newspapers are under pressure, some publishers are second-guessing their liberal attitude toward free content.

“A lot of news organizations are saying, ‘We’re not willing to accept the tiny fraction of a penny that we get from the page views that these links are sending in,’ ” said Joshua Benton, the director of the Nieman Journalism Lab at Harvard. “They think they need to defend their turf more aggressively.”

Copyright infringement lawsuits directed at bloggers and other online publishers seem to be on the rise. David Ardia, the director of the Citizen Media Law Project, said the project tracked 16 such suits in 2007, up from three in each of 2004 and 2005. And newspapers sometimes send cease-and-desist orders to sites that they believe have crossed the line.

Some publishers complained last week when Google News, a site that aggregates headlines from thousands of news sources, added advertising to its search results.

Last December, GateHouse Media sued The New York Times Company, alleging copyright infringement after local sites associated with The Boston Globe, a Times Company newspaper, copied the headlines and lead sentences of GateHouse’s newspaper articles. The case was settled out of court in January.

In another case, which is pending, The Associated Press sued the online news distributor All Headline News last year, saying that it had improperly copied A.P. articles.

The legal disputes are emblematic of a larger question that has emerged from the Internet’s link economy. The editors of many Web sites, including ones operated by the Times Company, post excerpts from competitors’ content from time to time. At what point does excerpting from an article become illegal copying?

Courts have not provided much of an answer. In the United States, the copyright law provides a four-point definition of fair use, which takes into consideration the purpose (commercial vs. educational) and the substantiality of the excerpt.

But editors in search of a legal word limit are sorely disappointed. Even before the Internet, lawyers lamented that the fair use factors “didn’t map well onto real life,” said Mr. Ardia, whose Citizen Media Law Project is part of the Berkman Center at Harvard Law School. “New modes of creation, reuse, mixing and mash-ups made possible by digital technologies and the Internet have made it even more clear that Congress’s attempt to define fair use is woefully inadequate.”

For now, Web sites are defining it themselves. Sites like Alley Insider and The Huffington Post are ad-supported businesses that filter the Web for readers, highlighting what they deem to be the most meaningful parts of newspaper articles and TV segments.

Alley Insider, according to its editor in chief, Henry Blodget, operates under a digital golden rule: “To excerpt others the way we want to be excerpted ourselves.” The post about Ms. Noonan’s column, including five full paragraphs, had explicit credits to the author and the newspaper, three links to the source and a direct encouragement to users to read the original column.

Alley Insider doubtlessly exposed new readers to Ms. Noonan’s column, and an unknown number of users followed the links to The Journal’s Web site. But others probably did not follow the link, meaning that Alley Insider alone — and not The Journal — reaped the advertising pennies from the excerpt.

The Huffington Post, the popular news and opinion forum co-founded by the author and columnist Arianna Huffington, is perhaps the star of the excerpting debate. Ms. Huffington’s editors are especially adept at optimizing the site for search engine results, so that in a Google search, a Huffington Post summary of a Washington Post or a CNN.com report may appear ahead of the original article.

“We want to both drive traffic to ourselves and drive traffic to others,” Ms. Huffington said in a telephone interview. Adding that “we are at the beginning of developing the rules of the road” online, she said the site’s editors were “constantly talking” about appropriate excerpting conduct.

To the extent that the site republishes articles produced by other organizations, “we excerpt to add value,” Ms. Huffington said, sometimes by combining articles, videos and transcripts. Much of the Web works this way, skimming quotes and photos from other sources while trying to remain within the provisions of fair use.

Ms. Huffington said that The Huffington Post, which had more than 20 million unique visitors in January, received more than 100 requests for links each weekday from reporters, editors and public relations representatives. “Everybody wants to be linked to,” she said.

That is true as long as readers follow those links. The prevailing wisdom is that content should roam widely online, but lackluster digital advertising of late has called that into question.

That has fueled a round of recent commentaries about payment models for online news. Cablevision, the owner of the Newsday newspaper, said Thursday that it would “end distribution of free Web content.” Hearst, the owner of 16 newspapers, said Friday that it would charge for some content on its Web sites.

Widespread excerpting would seem to make pay models harder to impose. Even more troubling for news organizations is blatant copying. In December, The Huffington Post's new Chicago offshoot was accused of copying the full contents of local publications' concert reviews. Ms. Huffington called it a "mistake made by an intern."

Other sites copy content from news organizations using automated syndication feeds. The sites typically display text or show ads around the excerpts to make money.

GateHouse’s suit against The New York Times Company contended that the company was “link scraping” by automatically aggregating articles from GateHouse newspapers, to be excerpted on local news sites operated by The Boston Globe.

“They felt that The Globe was benefiting too much from the work of GateHouse journalists,” said Mr. Benton of the Harvard journalism lab. The Times Company denied that it was scraping GateHouse’s site and said that its use of GateHouse content did not violate copyright laws.

It also said that GateHouse’s Web sites copied headlines and other text from Times Company sites. Last month the Times Company agreed to stop copying GateHouse’s headlines and lead paragraphs.

It remains to be seen whether excerpting standards from before the Internet age still apply. Mr. Ardia said that quoting “is often a sign of respect” online.

“The norms are developing outside — or ahead of — the law,” he said.

Alley Insider’s partial republication of Ms. Noonan’s column, for instance, was edited shortly after it was posted online. The reason, Mr. Blodget said, was that the excerpt seemed slightly too long.


Hollywood-Funded Study Concludes Piracy Fosters Terrorism | Threat Level from Wired.com


Hollywood-Funded Study Concludes Piracy Fosters Terrorism

By David Kravets | March 03, 2009, 5:46:54 PM | Categories: Intellectual Property

Here's a startling coincidence: A study funded by Hollywood concludes movie piracy is hurting the industry and fostering terrorism.

The Motion Picture Association, the European counterpart to the Motion Picture Association of America, paid for the 182-page RAND Corp. study.

Here's a snippet from Film Piracy, Organized Crime, and Terrorism:

Moreover, three of the documented cases provide clear evidence that terrorist groups have used the proceeds of film piracy to finance their activities. While caution must be exercised in drawing broad conclusions from limited evidence, further investigation is a timely imperative. These cases, combined with established evidence for the broader category of counterfeiting-terrorism connections, are highly suggestive that intellectual-property theft — a low-risk, high-profit enterprise — is attractive not only to organized crime, but also to terrorists, particularly opportunistic members of local terrorist cells.

Last year, then-Attorney General Michael Mukasey uttered a similar view when he said intellectual property theft promoted terrorism.

Find the RAND report here.


Google’s Digitized Book Project Hinges on a Retro Kind of Search - NYTimes.com


A Google Search of a Distinctly Retro Kind

By NOAM COHEN

Last month an e-mail message washed up at the offices of The Cook Islands News in the South Pacific. It was a request to place a half-page advertisement in the newspaper, which has a circulation of 2,500. The cost was $370.

“We were amazed — it came from out of nowhere,” the newspaper’s editor, John Woods, said in a telephone interview. “We are very skeptical of ads like that.”

Even more surprising was who was paying for it: Google.

Google, the online giant, had been sued in federal court by a large group of authors and publishers who claimed that its plan to scan all the books in the world violated their copyrights.

As part of the class-action settlement, Google will pay $125 million to create a system under which customers will be charged for reading a copyrighted book, with the copyright holder and Google both taking percentages; copyright holders will also receive a flat fee for the initial scanning, and can opt out of the whole system if they wish.

But first they must be found.

Since the copyright holders can be anywhere and not necessarily online — given how many books are old or out of print — it became obvious that what was needed was a huge push in that relic of the pre-Internet age: print.

So while there is a large direct-mail effort, a dedicated Web site about the settlement in 36 languages (googlebooksettlement.com/r/home) and an online strategy of the kind you would expect from Google, the bulk of the legal notice spending — about $7 million of a total of $8 million — is going to newspapers, magazines, even poetry journals, with at least one ad in each country. These efforts make this among the largest print legal-notice campaigns in history.

That Google is in the position of paying for so many print ads “is hilarious — it is the ultimate irony,” said Robert Klonoff, dean of Lewis & Clark Law School in Portland, Ore., and the author of a recent law review article titled “Making Class Actions Work: The Untapped Potential of the Internet.”

So far, more than 200 advertisements have run in more than 70 languages: in highbrow periodicals like The New York Review of Books and The Poetry Review in Britain; in general-interest publications like Parade and USA Today; in obscure foreign trade journals like China Copyright and Svensk Bokhandel; and in newspapers in places like Fiji, Greenland, the Falkland Islands, and the Micronesian island of Niue (the name is roughly translated as Behold the Coconut!), which has one newspaper.

The almost comically sweeping attempt to reach the world’s entire literate population is a reflection of the ambitions of the Google Book Search project, in which the company hopes to digitize every book — famous or not, in any language, published anywhere on earth — found in the world’s libraries.

Under the proposed settlement, reached on Oct. 28 and still subject to court approval, there must be an effort the court finds “reasonable and practicable” to find authors and publishers — especially copyright holders of so-called orphan books, which are still in copyright but long out of print. So the task means placing at least one advertisement in every country in the world.

One reason courts have required such heroic efforts to reach the people covered by a settlement is that unless parties opt out of the settlement, they are automatically opting in. The least that must be done, the argument goes, is let those affected know about it.

But as it turns out, authors and publishers are hard to track down. More than members of most settlement classes, said Kathy Kinsella of Kinsella Media in Washington, which is directing the ad campaign, these are a particularly diffuse group.

“We looked at how many books were published in various areas,” she said, “and we knew from the plaintiffs and Google that 30 percent were published in the U.S., 30 percent in industrialized countries. The rest of the world is the rest.”

“We had some choices,” she added. “We thought it made sense that in order to meet the due-process standard that we were as broad-based as possible.”

So, using United Nations data, her company created a list of countries and territories. Some nations, including Iraq, Afghanistan and Iran, were excluded because they do not agree to international copyright terms. In others, like Cuba, North Korea and Myanmar, her company is prohibited from buying ads because of United States trade embargos, Ms. Kinsella said.

Kinsella Media also hired a company to run the telephone line that takes calls, which, Ms. Kinsella said, raised its own questions: “How do you handle calls in 80-some languages around the world? How do you staff that? Is it worth having someone in French all the time?”

Michael Boni, a lawyer representing the Authors Guild, one of the parties that sued Google, acknowledged there was an aspect of “belts and suspenders” in using print and the Internet to spread the word about the settlement, but he added that “the Internet is not used to the same extent outside the U.S.”

“I have been doing class-actions for over 20 years, and I don’t think there is a notice program as comprehensive as this notice program,” he said.

For centuries, legal notices have been a reliable source of income for newspapers and, more recently, trade publications and television. Class-action notifications constitute a significant chunk of this revenue, with an estimated $50 million to $75 million spent a year, the bulk going to print advertising, according to Todd B. Hilsee, a communications expert in Philadelphia who advises courts on the issue.

Fran Biggs, the office manager at the weekly Penguin News in the Falkland Islands, said she was surprised by the Google settlement ad in her paper: “I suppose it did seem a bit odd, but if people are paying for it, why not?” She added that the advertising climate there is not as dire as it is in the rest of the world. “We never have any problems filling up pages,” she said. “We have a bunch of big stores.”

At The Cook Islands News, the advertisement led to a follow-up article the next week that quoted a prominent resident author, Ron Crocombe, who praised the convenience of the currently available Google Book Search (books.google.com), which publishes excerpts and tables of contents. He said it was useful “especially for us in small places like Rarotonga, where there are no big libraries or big bookshops.”

It turns out, however, that in the Google matter, the advertising side of the newspaper bent its rule of insisting on payment in advance. That led to a few nervous moments when the paper had not received its money. By the week’s end, however, the editor, Mr. Woods, reported that he had received a credit card authorization.


Pirate Bay Trial Ends; Verdict Due April 17 | Threat Level from Wired.com


Pirate Bay Trial Ends; Verdict Due April 17

By Wired Staff | March 03, 2009, 7:40:09 PM | Categories: Yo Ho Ho

Special correspondent Oscar Swartz reports.

STOCKHOLM -- The Pirate Bay trial wrapped up here Tuesday amid a media circus as attorneys for the four accused founders of the world's most notorious BitTorrent tracker proclaimed their clients' innocence to charges of facilitating copyright infringement.

One of the attorneys declared the 2-week trial a mockery.

"These kinds of abstract cases are not supposed to be brought to court at all," attorney Per E. Samuelson said during his argument. "The prosecutor has not managed to keep calm in light of the enormous pressure and lobbying from record and film companies."

Scandinavian television teams and journalists descended upon this small courtroom to see the final hours of a case generating international headlines. The three younger of the four defendants are by now media celebrities in their own right and seemed at ease with the frantic attention they provoked.

Documentary filmmakers working on a third part of Steal This Film were feverishly capturing the drama. Reuters turned up with TV cameras and conducted interviews in English in response to "demand from our international clients."

Open-culture artists like Sebastian Lütgert, who ran the Pirate Cinema project in Berlin, had flown in to show support for the four Pirate Bay co-defendants. One spectator showed journalists an authentic police report from a station in Stockholm where he had reported Google for facilitating copyright infringement.

For Pirate Bay fans, the day started in a subdued mood but ended with a bang in Samuelson's oration.

Defense lawyers for defendants Fredrik Neij and Gottfrid Svartholm Warg seemed to mostly nitpick about technicalities, but did not seem to punch significant holes in the prosecution.

Peter Sunde's lawyer, Peter Althin, turned up the heat by orating a historical exposé of how vested interests have tried to block development through legal wrangling. He mentioned that musicians fought radio, that the VCR was almost outlawed and that authors even questioned libraries. He claimed his client was only a spokesman for the tracker and challenged the industry's claim of $13 million in damages.

Althin reminded the court that industry bosses testified that CD and movie-ticket sales were dwindling on account of the Pirate Bay, which claims some 22 million users. But his own witness testified there was no scientifically established causal link between file-sharing and diminishing revenues.

All the while, defense attorney Samuelson captivated the gallery.

Formally representing Carl Lundström -- the 48-year-old outsider business executive who provided bandwidth and rack space for the Pirate Bay -- Samuelson spoke for all the defendants and handed over a stack of legal cases to the court.

"I don't think the prosecutor ever considered that such a case is not supposed to go to court," Samuelson stated confidently. "It comes into conflict with basic Swedish criminal law. There is not a single law textbook that does not clearly state that in order to be an accomplice you have to be aware of the concrete main crime that you are supposed to facilitate."

This was not the case here, he said. None of the defendants had any specific knowledge of the 33 copyright infringements charged in the case. "It is not enough, according to Swedish law, to have a general knowledge that crimes may be committed."

He said the entire case was "illegal according to Swedish law."

After it was over, the Pirate Bay crew was seen cracking jokes with prosecutor Håkan Roswall and others in the courthouse lobby.

The panel of four judges is expected to issue a verdict April 17. The defendants face up to two years in prison each and $180,000 in fines plus millions in damages.

----

Editor's note: Threat Level extends its thanks to Swedish writer Oscar Swartz for his first-rate reporting from the front line of the Pirate Bay trial.


ACRL - ACRL Scholarly Communications 101 Road Show


ACRL offers scholarly communication 101 road show at no cost to you

Bring the workshop “Scholarly Communication 101: Starting with the Basics” to your campus, compliments of ACRL. Recognizing that scholarly communication issues are central to the work of all academic librarians and all types of institutions, ACRL is pleased to underwrite the costs for delivering proven content to you. Two expert presenters emphasize experiential learning in this 3-hour workshop. You need to collaborate with at least one other local academic library and arrange the logistics.

Program Description

This structured interactive overview of the scholarly communication system underpins individual or institutional strategic planning and action. Four modules focus on:

• new methods of scholarly publishing and communication
• copyright and intellectual property
• economics
• open access and openness as a principle

The workshop is appropriate for those with new leadership assignments in scholarly communication as well as liaisons and others who are interested in the issues and need foundational understanding.

Learning Objectives

Participants will:

• Understand scholarly communication as a system to manage the results of research and scholarly inquiry and be able to describe system characteristics, including academic libraries and other major stakeholders and stakeholder interests, major types and sources of current stress and evolution, and key indicators of size, complexity, and rates of change
• Enumerate new modes and models of scholarly communication; business models; research & social interaction models (from blogs, curated websites, etc), and peer review models and examples of the ways in which academic libraries have or can initiate or support those models
• Be able to select and cite key principles, facts, and messages relevant to current or nascent scholarly communication plans and programs in their institutions, e.g. as preparation for library staff or faculty outreach, to contextualize collection development decisions

Successful Applicants Must

• Include participants from more than one institution.
• Include a minimum of 35 and a maximum of 100 participants, to allow for maximum interactivity.
• Provide a statement of support from hosting authority, i.e. library director/dean, consortia/association administrator, or ACRL chapter leader.
• Provide a brief essay (1 page maximum) explaining what your institutions will do after the workshop to maintain momentum, engagement, and awareness.
• Apply by April 13, 2009.
• Host this event by August 31, 2009.

Preference to

• Hosts who are organizational members of ACRL (Not sure? Ask ACRL staff member kmalenfant@ala.org to check for you.)
• Hosts who identify an experienced local presenter to partner and deliver workshop content.
• Diversity of institution types represented among participants (e.g., two-year, liberal arts, masters comprehensive, doctoral)
• Diversity of types of library staff participating (e.g., liaison librarians, catalogers, access services staff, senior management)

Host Responsibilities

1) Registration

a) Marketing and publicity of the workshop (print, Web, e-mail)
b) Consider this an opportunity to invite staff outside the library (e.g., research office, graduate college)
c) Management of selection process, if any
d) Management of registration process (i.e. issuing registration receipts, rosters, etc.)
e) Limit participation to 100 individuals (minimum participation is 35), to allow for maximum interactivity
f) Participant and presenter name badges

2) Event coordination and logistics to include:

a) Reservation of meeting space per room requirements provided by presenters
b) On-site A-V technology and support
c) Planning and associated costs of food and beverage for break (if any)
d) Printing and copying of handouts in advance
e) Volunteer staff as needed

Complete an application form at https://www.surveymonkey.com/s.aspx?sm=WMvXmmVOnEiUCm6oWl_2b12A_3d_3d by Monday, April 13, 2009. The ACRL Scholarly Communication Committee will review applications and select several locations based on the number of requests and capacity. The committee will aim for geographic diversity and notify applicants of their status by Friday, April 24, 2009.

Expert presenters may include:
• Lee VanOrsdel, Dean of University Libraries, Grand Valley State University
• Joy Kirchner, Project Manager, Scholarly Communications & Sciences Collections Librarian, University of British Columbia Library
• Molly Keener, Reference Librarian, Wake Forest University Health Sciences
• Sarah Shreeves, Coordinator, IDEALS, University of Illinois at Urbana-Champaign

Questions about the program or how to apply? Please contact Kara J. Malenfant, Scholarly Communications and Government Relations Specialist, ACRL, at kmalenfant@ala.org or 800/545-2433 ext. 2510.


Tim O'Reilly makes the argument for Open Publishing @ TOC 2009 on Vimeo

Tim O'Reilly makes the argument for Open Publishing @ TOC 2009

by Open Publishing Lab @ RIT

Drawing upon his real-world experiences, Tim O'Reilly shares his thoughts on Open Publishing, why it's a good idea, and how to make it work. This video was taken on the floor of the 2009 O'Reilly Tools of Change conference in New York City.
For more information on Tim O'Reilly (and why he knows what he's talking about), head to oreilly.com/ or follow him at: twitter.com/timoreilly
You can read (and download) "What is Web 2.0" at:
oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html
The video was produced by the Open Publishing Lab at RIT. The OPL is a cross disciplinary open-source publishing research center. To learn more about the OPL and our open-source publishing projects, visit opl.rit.edu or follow us at: twitter.com/ritopl
This video was shot using a Kodak Zi6 Digital Video Camera kodak.com/go/Zi6

Science in the open » What is the cost of peer review? Can we afford (not to have) high impact journals?

Late last year the Research Information Network held a workshop in London to launch a report and, in many ways more importantly, a detailed economic model of the scholarly publishing industry. The model aims to capture the diversity of the scholarly publishing industry and to isolate costs and approaches, enabling the user to ask questions such as “what is the consequence of moving to a 95% author-pays model?” as well as simply to ask how much money is going in and where it ends up. I’ve been meaning to write about this for ages, but a couple of things in the last week have prompted me to get on and do it.

The first of these was an announcement by email [can’t find a copy online at the moment] by the EPSRC, the UK’s main funder of physical sciences and engineering. While the requirement for a two-page economic impact statement for each grant proposal got more headlines, what struck me as much more important were two other policy changes. The first was that, unless specifically invited, rejected proposals cannot be resubmitted. This may seem strange, particularly to US researchers, for whom a process of refinement and resubmission, perhaps multiple times, is standard, but the BBSRC (UK biological sciences funder) has had a similar policy for some years. The second, frankly somewhat scary, change is that some proportion of researchers who have a history of rejection will be barred from applying altogether. What is the reason for these changes? Fundamentally, the burden of carrying out peer review on all of the submitted proposals is becoming too great.

The second thing was that, for the first time, I was involved in refereeing a paper for a Nature Publishing Group journal. Now I like to think, as I guess everyone does, that I do a reasonable job of refereeing papers. I wrote perhaps one and a half sides of A4 describing what I thought was important about the paper and making some specific criticisms and suggestions for changes. The paper went around the loop, and on the second revision I saw what the other referees had written: pages upon pages of closely argued and detailed points. The other referees were much more critical of the paper, but this nonetheless supported a suspicion I have had for some time: that refereeing at some high-impact journals is qualitatively different from what the majority of us receive, and probably deliver; an often form-driven exercise with a couple of lines of comments and complaints. This level of peer review takes an awful lot of time, and it costs money; money that is coming from somewhere. Nonetheless it provides better feedback for authors and no doubt means the end product is better than it would otherwise have been.

The final factor was a blog post from Molecular Philosophy discussing why the author felt Open Access publishers are, if not doomed to failure, then facing a very challenging road ahead. The centre of the argument, as I understand it, was the cost of high-impact journals, particularly the costs of selection, refinement, and preparation for print. Broadly speaking, I think it is generally accepted that a volume model of OA publication, such as that practised by PLoS ONE and BMC, can be profitable. I think it is also generally accepted that a profitable business model for high-impact OA publication has yet to be convincingly demonstrated. The question I would like to ask, though, is different. The Molecular Philosophy post skips the zeroth-order question: can we afford high-impact publications at all?

Returning to the RIN-funded study and model of scholarly publishing, some very interesting points came out [see Daniel Hull’s presentation for most of the data here]. The first of these, which in retrospect is obvious but important, is that the vast majority of the costs of producing a paper are incurred in doing the research it describes (£116G worldwide). The second biggest contributor? Researchers reading the papers (£34G worldwide). Only about 14% of the costs of the total life cycle are taken up with costs directly attributable to publication. But that is the 14% we are interested in, so how does it divide up?

The “Scholarly Communication Process”, as everything in the middle is termed in the model, is divided into actual publication and distribution costs (£6.4G), access provision costs (providing libraries and internet access, £2.1G), and the cost of researchers looking for articles (£16.4G). Yes, the biggest cost is the time you spend trying to find those papers. Arguably that is a sunk cost, inasmuch as once you’ve decided to do research, searching for information is a given, but it does make the point that more efficient searching has the potential to save a lot of money. In any case it is a non-cash cost in terms of journal subscriptions or author charges.

So to find the real costs of publication per se we need to look inside that £6.4G. Of the costs of actually publishing the articles, the biggest single item is peer review, weighing in at around £1.9G globally, just ahead of fixed “first copy” publication costs of £1.8G. So about 29% of the total costs incurred in publication and distribution of scholarly articles arises from the cost of peer review.
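The shares quoted above can be checked with some back-of-envelope arithmetic. A minimal sketch, using the worldwide figures (in billions of pounds) quoted from the report; the variable names are my own labels, not the model's:

```python
# Worldwide cost figures quoted from the RIN report, in billions of pounds.
research = 116.0     # doing the research itself
reading = 34.0       # researchers reading papers
publication = 6.4    # publication and distribution
access = 2.1         # access provision (libraries, internet access)
searching = 16.4     # researchers searching for articles
peer_review = 1.9    # peer review, a component of `publication`

total = research + reading + publication + access + searching
communication = publication + access + searching  # the "middle" of the process

print(f"communication share of the life cycle: {communication / total:.1%}")  # 14.2%
print(f"peer review share of publication costs: {peer_review / publication:.1%}")  # 29.7%
```

The numbers line up with the "about 14%" and "29%" figures in the text once rounded.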

There are lots of other interesting points in the reports and models (the UK is a net exporter of peer review, but the UK publishes more articles than would be expected based on its subscription expenditure), but the most interesting aspect of the model is its ability to explore changes in the publishing landscape. The first scenario presented is one in which publication moves to being 90% electronic. This actually leads to a fairly modest decrease in costs overall, with a total saving of a little under £1G (less than 1%). Modeling a move to a 90% author-pays model (assuming 90% electronic only) leads to very little change overall, but interestingly that depends significantly on the cost of the systems put in place to make author payments. If these are expensive and bureaucratic then costs can rise, as many small payments are more expensive than a few big ones. But overall the costs shouldn’t need to change much, meaning that if mechanisms can be put in place to move the money around, the business models should ultimately be able to make sense. None of this, however, helps in figuring out how to manage a transition from one system to another, when for all practical purposes costs are likely to double in the short term as systems are duplicated.

The most interesting scenario, though, was the third: what happens as research expands? A 2.5% real increase year on year for ten years was modeled. This may seem profligate in today’s economic situation, but with many countries explicitly spending stimulus money on research, or already engaged in large-scale increases in structural research funding, it may not be far off. This results in 28% more articles, 11% more journals, a 12% increase in subscription costs (assuming, of course, that only the real cost increases are passed on), and a 25% increase in the costs of peer review (£531M on a base of £1.8G).
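The 28% figure for articles is just compound growth, which is easy to verify:

```python
# 2.5% real growth per year, compounded over ten years.
factor = 1.025 ** 10
print(f"increase in article volume after ten years: {factor - 1:.0%}")  # 28%
```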

I started this post talking about proposal refereeing. The increased cost of refereeing proposals as the volume of science increases would be added on top of that for journals, and I think it is safe to say that the increase would be of the same order. The refereeing system is already struggling under the burden: funding bodies are creating new, and arguably totally unfair, rules to try to reduce it, and journals are struggling to find referees for papers. Increases in the volume of science, whether they come from increased funding in the western world or from growing, increasingly technology-driven economies, could easily increase that burden by 20–30% in the next ten years. I am sceptical that the system as it currently exists can cope, and I am sceptical that peer review in its current form is affordable in the medium to long term.

So, bearing in mind Paulo’s admonishment that I need to offer solutions as well as problems, what can we do about this? We need to find a way of doing peer review effectively, but it needs to be more efficient. Equally, if there are areas where we can save money, we should be doing that. Remember that £16.4G just to find the papers to read? I believe in post-publication peer review because it reduces the cost and time wasted in bringing work to community view, and because it makes the filtering and quality assurance of published work continuous and ongoing. But in the current context it also offers significant cost savings. A significant proportion of published papers are never cited; to me it follows that there is no point in peer reviewing them. Indeed, citation is an act of post-publication peer review in its own right, and it has recently been shown that Google PageRank-type algorithms do a pretty good job of identifying important papers without any human involvement at all (beyond the act of citation). Of course, for PageRank mechanisms to work well the citation and its full context are needed, making OA a prerequisite.
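To make the PageRank-over-citations idea concrete, here is a toy sketch of how such a ranking works. This is purely illustrative (a minimal power-iteration PageRank on a made-up citation graph), not the algorithm from any particular study:

```python
# Toy PageRank over a citation graph: paper -> list of papers it cites.
# Rank flows from citing papers to cited papers, so heavily cited
# papers accumulate rank without any human judgement beyond citation.
def pagerank(cites, damping=0.85, iters=50):
    papers = list(cites)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p, refs in cites.items():
            if refs:
                share = damping * rank[p] / len(refs)
                for r in refs:
                    new[r] += share
            else:
                # A paper citing nothing spreads its rank evenly.
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical graph: papers A, B, and C all cite D, so D ranks highest.
ranks = pagerank({"A": ["D"], "B": ["D"], "C": ["D"], "D": []})
print(max(ranks, key=ranks.get))  # D
```

The point is only that the inputs are citations alone; everything else is mechanical, which is what makes it a cheap complement to (or filter before) human refereeing.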

If refereeing can be restricted to those papers that are worth the effort, then it should be possible to reduce the burden significantly. But what does this mean for high-impact journals? The whole point of high-impact journals is that they are hard to get into; this is why both their editorial staff and peer review costs are so high. Many people make the case that they are crucial for helping to filter out the important papers (remember that £16.4G again). In turn I would argue that they reduce value by making the process of deciding what is “important” a closed shop, taking that decision away, to a certain extent, from the community where I feel it belongs. But at the end of the day it is a purely economic argument. What is the overall cost of running, supporting through peer review, and paying for, either by subscription or via author charges, a journal at the very top level? What are the benefits gained in terms of filtering, and how do they compare to other filtering systems? Do the benefits justify the costs?

If we believe that better filtering systems are possible, then they need to be built and the cost–benefit analysis done. The opportunity to offer different, and more efficient, approaches is coming soon, as the burden becomes too much to handle. We either have to bear the cost or find better solutions.
