Online reputation and reporting

Page history last edited by n.jacobs@... 11 years, 4 months ago

-    A researcher wishes to use their research paper and associated data (citation, usage, links, etc) to manage and improve their online reputation, adopting tactics to have themselves represented well in key web services (databases, blogs, social networking sites).  They look to repositories as a tool to help them.

-    A research manager (in a university or funding organisation perhaps) needs to compile a report identifying all the outputs from a set of projects, together with measures of their impact (citations, usage, etc).  Many of these outputs are in repositories, often duplicated across the web.

Potential components include:

 - etc

Comments from previous version of wiki

 

 

22 Dec

leo waaijers says:

A funder wishes to mandate Open Access to the publications that result from research funded by them. The availability of two elements is critical: (1) an open harvestable repository and (2) a non-proprietary review system. The first issue is addressed in use case 2.

In this use case we could discuss the second issue. Should funders remain passive with respect to this issue i.e. follow the market and allow opt-outs from the mandate? Or should they become pro-active e.g. by tendering non-proprietary review systems? What are the requirements for such a system? Could Knowledge Exchange organise the tender on behalf of a consortium of OA funders?   

 

      21 Jan

      Jim Downing says:

      Leo, I'm unsure why the non-proprietary review system is necessary - could you explain for me please?

 

            30 Jan

            leo waaijers says:

            Jim, the main problem with the proprietary review systems - where you have to assign your copyrights in exchange for publication - is the uncertainty about the right to deposit your article in a repository. A visit to the RoMeo/Sherpa website is enough to discover the complete copyright mess that publishers have created here. The only constant is an almost universal ban on making the published article openly accessible. But that's exactly what is needed! All other routes to open access are surrogates; necessary for the time being perhaps, but surrogates nevertheless.

 

            In a non-proprietary review system you pay for publication but you retain all your rights. I think we should take every opportunity to stimulate this approach. Now that the EC and other funders have engaged in Open Access - meaning that they are prepared to pay the publication fee - they should also stimulate the availability of non-proprietary review systems. Otherwise they sell empty cartridges. In fact that is what they do currently. For example, the EC tells authors to "make their best efforts to negotiate copyright and licensing conditions that comply with the open access pilot in FP7" (quoted from their promotional Open Access leaflet). Instead they should further the realization of non-proprietary review systems by tendering these systems and inviting publishers to submit proposals for them.

 

            My proposal is that we take the opportunity of also having research funders, incl. the EC, around the table in our forthcoming workshop to discuss this approach.

 

                  04 Feb

                  Jim Downing says:

                  Leo, thanks for that, I think I understand better. Non-proprietary review systems as you define them do exist in traditional publishers. I'm not well informed as to how common it is, but for example, the Royal Society of Chemistry do not require a copyright transfer. They do, however, have a license that covers their edited copy of the paper (the author's copy is not covered by this agreement). In your eyes, is this a workable minimum, or is full open access (without wishing to get into a discussion of the semantics of that phrase) the only answer?

 

                  There's something worrying about the idea of funders controlling the review and editorial system - like not separating the judiciary and executive in a government.

 

                  I completely agree with the idea of having research funders (and also those who direct research assessment) around the table.

 

                        11 Feb

                        leo waaijers says:

                        "... is full open access the only answer?" In my opinion full open access is not the only answer. However, I know that authors are hesitant about circulating their manuscripts (even post-prints) for two reasons: (1) the lack of copyright transparency (RoMeo/SHERPA is more a demonstration of how complicated it is than a solution to it), (2) loss of official citations (citations of freely circulated manuscripts are not counted by WoS or Scopus).

 

                        "... something worrying about the idea of funders controlling the review and editorial system ..." I did not mean that funders should control the review and editorial system. At least no more than they 'control' research itself. After all, they set the criteria for financing research and subsequently they grant project proposals. Basically, they outsource the research that they want to have executed. Similarly, they could set generic criteria for peer review systems (i.e. being non-proprietary, independent, swift etc.) and grant proposals for such systems (e.g. of publishers). The fact that they pay for both components ('executive' research and 'judiciary' peer review) does not mean an intermingling of both. It is like the government paying both the judiciary and executive yet keeping them independent.

 

26 Dec

Alma Swan says:

The University of Pretoria has assigned a number of library staff members (I think it was 19) to create citation links for articles in the repository: that is, linking all the references in those articles to the original article or a copy in an OA repository elsewhere. Is this likely to be a one-off, or is it scalable? Does anyone know of any other institutions or organisations doing this or contemplating doing so? Or projects looking into it?

 

06 Jan

Neil Jacobs says:

I'm not sure whether the manual approach is scalable.  There are existing tools and activities of course (Web of Science, Scopus, CrossRef, ToC services, etc) that might be built on, if suitable business models became available.  Alternatively there are more 'webby' components such as the CLADDIER trackback tool [see report here (Word doc) and DSpace software here], and also VALREC.  Does Academia.edu have a potential role?

 

06 Jan

Neil Jacobs says:

I guess usage statistics are also relevant to these use cases.  Worth noting the MESUR work of course, and PIRUS for Counter-compliant article-level statistics from publishers and repositories.  The JISC Usage stats review gives a good basis for work on this, being taken forward also in Germany by  the OA-Statistik project.

 

21 Jan

Jim Downing says:

An open question: How comprehensive would open citation data have to be to deliver reliable metrics? To deliver usable metrics?

 

      30 Jan

      Neil Jacobs says:

      I suspect that depends what you want to use them for.  If for allocating funding, then they need to be pretty comprehensive.  If being used as an indicator of online impact, or some such, then maybe less so (maybe).  But it's a socio-psychological question too - how did the h-factor become accepted? What did it have that made it compelling and attractive?  Was it comprehensiveness?

 

            04 Feb

            Jim Downing says:

            "how did the h-factor become accepted?"

 

            If that was meant to be rhetorical then I'm afraid I'm in the dark, illumination would be appreciated! I suspect that what made it compelling and attractive to those assessing research was the relatively low cost. Did h-factor become popular with researchers before being used to measure research?

 

                  12 Feb

                  Tim Brody says:

                  As it was described to me the "h-factor" just 'works'! I would postulate a couple of important facets to the h-factor that are critical for acceptance:

 

                  1) It's easy for the end-user to calculate from ISI WoS (or any citation-count-sorted listing of papers)

 

                  2) The algorithm involved is understandable by non-stats specialists

 

                  3)  It tends to factor-out outliers (the super-cited & long-tail), which is a chronic problem with any population-count metric

 

                  Of course, that the original h-factor paper is OA probably helped as well
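
Assuming the "h-factor" discussed here is Hirsch's h-index, Tim's first two facets are visible in how little it takes to compute from a citation-count-sorted list. A minimal sketch:

```python
def h_index(citation_counts):
    """Compute the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    # Walk down the sorted list; position i is the number of papers
    # seen so far, so the condition "counts[i-1] >= i" is the h test.
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h
```

For example, a researcher with papers cited 10, 8, 5, 4 and 3 times has an h-index of 4: four papers with at least four citations each.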

 

21 Jan

Jim Downing says:

Regarding the first use case, I believe there is a great opportunity for institutions to provide better data management for researchers, tuned to their specific needs. The win for the researcher is that it becomes quicker to write manuscripts if the supporting information is already checked, assembled and formatted, and can be sent to both the publisher and the repository with a click.

 

What perhaps stands in the way of this is looking at it from a repository-centric view, which tends to concentrate on short term payback for the institution and long term payback for the researcher, whilst placing the burden of effort on the researcher. The focus has to shift from "making repository deposit easier" to "making research easier (and depositing data on the way)".

 

 

21 Jan

Jim Downing says:

The second use case partly refers to the potential virtuous cycle of funders supporting institutions by making and strengthening their OA mandates, and in turn being supported in doing so by the repositories.

 

 The very simple measure noted in the use case of ensuring that funder id and grant id metadata is captured is an appealing one, but how could this be implemented without considerable manual effort on the part of the repository and / or placing another barrier in the way of self-deposit? There is a comparatively tractable technical problem too: making sure the funder and grant metadata can be crosswalked between formats without loss.
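
The lossless-crosswalk point can be sketched as a round-trip test: funder and grant ids survive only if each target format gives them dedicated, machine-readable fields rather than folding them into free text. The field names below are purely illustrative, not taken from any real schema:

```python
# Hypothetical sketch: carrying funder and grant identifiers through a
# metadata crosswalk without loss. All field names are illustrative.

def to_dc_terms(record):
    """Map an internal deposit record to DC-style key/value pairs,
    keeping funder and grant ids as dedicated fields."""
    return {
        "dc:title": record["title"],
        "dc:creator": record["author"],
        # Ids stay machine-readable, so the reverse crosswalk is lossless.
        "funder_id": record["funder_id"],
        "grant_id": record["grant_id"],
    }

def from_dc_terms(dc):
    """Reverse crosswalk; round-trips without losing the grant link."""
    return {
        "title": dc["dc:title"],
        "author": dc["dc:creator"],
        "funder_id": dc["funder_id"],
        "grant_id": dc["grant_id"],
    }
```

A crosswalk that instead dropped `grant_id` into a free-text description field would pass the forward direction but fail the round trip, which is exactly the loss the use case warns about.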

 

 It's worth noting here that mandates are almost always created without teeth, and that the cycle of strengthening the mandate, defining compliance, auditing compliance, and policing non-compliance will take some time - it cannot be assumed that all funders with an OA mandate will be crying out for this immediately. There are encouraging signs from Wellcome though: http://www.wellcome.ac.uk/About-us/Publications/Newsletters/Grantholders-newsletter/WTX052748.htm - not currently operational...!

 

 

04 Feb

Ronald Ham says:

What I am wondering about is what is needed for online reputation in itself. How does a researcher get a reputable status? Is this something he or she can do alone, or is it only possible to gain a higher status if other researchers recommend a certain researcher or his work, or for that matter refer to the work of the researcher?

 

I believe one of the key elements in this use case would have to be the metrics, system or algorithm used for calculating or forming the online reputation of the researcher.

 

If a repository were to help the online reputation of a researcher, what information should the repository provide?

 

      05 Feb

      James Farnhill says:

      Is the NAMES project one of the possible elements of reputation, in that it allows unique and reliable identification of individuals and institutions?  Also, Lawrie Phipps and I ran a session on reputation at last year's Next Generation Environments that could have useful input with regard to reputation management online - I think it is about much more than just the researcher's research outputs, even though those are important.  See http://james.jiscinvolve.org/2008/05/06/identity-management-at-nge-2008/.

 

10 Feb

Keith G Jeffery says:

Concerning online evaluation, clearly metadata with formal syntax and defined semantics is needed.  For example the link from publication to publication (i.e. X to Y) should have semantics: not just 'cited' or 'referenced' but 'positively cited for support' or 'negatively cited by proof'.  Colleagues at CEMI-RAS in Moscow are working on this within CERIF (www.eurocris.org/cerif).
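
The difference between a bare "cites" relation and a semantically qualified one can be sketched as typed links. The link-type vocabulary below is illustrative only, loosely echoing the examples in the comment, not CERIF's actual model:

```python
# Hypothetical sketch of typed (semantically qualified) citation links,
# as opposed to an untyped "cites" edge. Link types are illustrative.

links = [
    ("X", "Y", "positively-cites-for-support"),
    ("X", "Z", "negatively-cites-by-proof"),
    ("W", "Y", "positively-cites-for-support"),
]

def citations_of(paper, link_list, link_type=None):
    """Return the papers citing `paper`, optionally filtered by the
    semantics of the link - a query an untyped graph cannot answer."""
    return [src for (src, tgt, t) in link_list
            if tgt == paper and (link_type is None or t == link_type)]
```

With typed links, "how often is Y cited in support?" becomes answerable directly, whereas an untyped citation count conflates support and refutation.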

 

For me putting the grant ref id in the publication metadata is crazy; the link is semantically the other way round: the publication - or research dataset - results from the grant, led by person P, named project PR, at organisation O, using teams M and N (both of which are part of O), with facility F (e.g. CERN) and experiment E (e.g. ATLAS).  This follows the natural workflow of a research organisation and the incremental build of the research information.

 

Neil states that the manual approach to citations is not scalable and I agree.  For automation we need metadata that is not only machine readable but also machine understandable.  Working from DC (or even MODS) won't scale either.

 

Finally, contextual metadata is required to assist in any disambiguation (is this the person Chris Jones who worked on project PR while employed by organisation O and working in team M, and published that paper X positively cited for support by Y...?).

 

 

11 Feb

Ian Mulvany says:

What the metric should be, how it is picked up and used, and how easy it is to generate are issues that can all be approached separately. I'll just comment here on the last of these. The web enables us to track not just contributions in the form of academic output, but also more ephemeral output. That I am contributing here in this medium is something the web has enabled, and this contribution, small though it is, could be tracked and rolled into a measure space.

 

It's not easy, though, to track blog posts and tweets.

 

There is an interesting project providing semantic markup for online discussions: Semantically-Interlinked Online Communities (http://sioc-project.org/). This kind of approach, I think, may have a relationship to the issue of generating researcher metrics at some point.

 

12 Feb

Tim Brody says:

Another use-case (as I've just talked to a guy from Mendeley - www.mendeley.com): a researcher wants to find potential collaborators based on their own research interests/outputs.
