Alma Swan has an interesting post discussing the value publishers add through copy editing, and she concludes that it is ... variable. She notes a publisher study:

Wates and Campbell looked at copy editing changes carried out on a set of science, humanities and social science articles at Blackwell Publishing (as was) and reported that the biggest category of corrections by the publisher was concerned with the references (42.7% of all copy editing changes), the next biggest category (34.5%) was concerned with minor syntactical or grammatical changes and a small proportion (5.5%) of changes corrected author ‘errors that might otherwise have led to misunderstanding or misinterpretation’. [OptimalScholarship]

I was interested in the attention to references. And I wondered whether the variety of tools introduced in recent years to help with the capture and management of such citation data (RefWorks, Zotero, etc) had reduced the number of errors spotted in a paper's references. It would be interesting to know how the corrections break down: errors in the bibliographic sources themselves, transcription errors, stylistic or completeness errors, and so on.

In the longer term, it will be interesting to see whether such data flows more easily with the potential introduction of citation microformats (I don't know what the status of this work is), or with the introduction, should it happen, of support in something like Microsoft Word for importing and exporting structured data of this sort. I still believe that we will see greater use made of a new 'bibliographic tissue' which connects the user environment and database resources through citation managers, reading lists, social bookmarking, microformats and RSS feeds.
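As a minimal sketch of what structured citation exchange might look like, here is a citation record serialised to RIS, a common interchange format that tools like RefWorks and Zotero already consume. The field choices and the example reference are purely illustrative, not drawn from any particular tool's export.

```python
# Minimal sketch: serialising a structured citation to an RIS record,
# the kind of format citation managers import and export.
# The example reference below is illustrative only.

def to_ris(record):
    """Serialise a citation dict to an RIS-format string."""
    tag_map = [
        ("type", "TY"), ("authors", "AU"), ("title", "TI"),
        ("journal", "JO"), ("year", "PY"), ("volume", "VL"),
        ("start_page", "SP"),
    ]
    lines = []
    for key, tag in tag_map:
        value = record.get(key)
        if value is None:
            continue
        if isinstance(value, list):  # e.g. multiple authors, one AU line each
            lines.extend(f"{tag}  - {v}" for v in value)
        else:
            lines.append(f"{tag}  - {value}")
    lines.append("ER  - ")  # end-of-record marker
    return "\n".join(lines)

citation = {
    "type": "JOUR",
    "authors": ["Smith, A.", "Jones, B."],
    "title": "An example article",
    "journal": "Example Journal",
    "year": "2007",
}
print(to_ris(citation))
```

The point is simply that once a reference exists as structured data rather than a typed string, style and completeness errors become mechanical to check, which is part of why better capture tools might reduce the corrections copy editors make.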

Incidentally, the discussion of copy-editing is by way of introducing a JISC-funded project looking at differences between versions of articles (different author versions, publisher version):

VALREC will ask stakeholders what levels of validation they would like to see, and what broad categories of differences would be helpful, such as ‘editorial differences’ and ‘content differences’. The project will then develop the technology to measure differences and generate a digital certificate for any article detailing the differences. An example of such a certificate is on the VALREC website. Not only will there then be a means to itemise the exact differences between the author-final and published version, but between other, earlier, versions of an article too, perhaps those first exposed on blogs or wikis. This will permit better formalisation and monitoring of the scholarly record, especially as authors move to early-use of repositories and informal web tools as part of the communications process. [OptimalScholarship]
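To make the idea concrete, here is a rough sketch of the kind of comparison such a project might automate: itemising the differences between an author-final and a published version of a passage, using Python's standard difflib. This is illustrative only, and not VALREC's actual method; the sample text is invented.

```python
# Sketch: itemising differences between two versions of a passage,
# in the spirit of the version comparison VALREC describes.
# The sample sentences are invented for illustration.
import difflib

author_final = [
    "Copy editing corrected the references.",
    "The biggest category was references (42.7%).",
]
published = [
    "Copy-editing corrected the references.",
    "The biggest category was references (42.7% of changes).",
]

diff = list(difflib.unified_diff(
    author_final, published,
    fromfile="author-final", tofile="published", lineterm=""))

for line in diff:
    print(line)
```

A real system would of course go further, classifying each change (editorial versus content, as the project proposes) rather than just listing it, but a line-level diff like this is the raw material such a certificate would summarise.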

The project is a joint one between Alma's company, Key Perspectives, which has done a lot of empirical work on open access and researcher behaviour, and the University of Southampton, which has been a major producer of tools, systems and data analysis in support of open access directions (see, for example, the eprints.org site).

Comments: 1

Jul 08, 2007
Neil Dickson

Lorcan,
The comment about large commercial publishers focusing their copy editing on references is entirely accurate. With the introduction of full-text electronic publishing, the need to create linkable references became a very high priority. This was coupled with publishers' concern to make CrossRef a viable success. The ability to resolve links to PubMed and other sources then made editing references nearly the top priority.

Using more and better reference creation tools won't necessarily bring about a reversion to a greater focus on the text until publishers see for themselves an overall improvement in manuscript copy.

Future developments coming from e-science, by way of better authoring environments and workbenches with automatic reference creation, validation and checking, will all be aimed at producing higher quality electronic manuscripts.