Online PDR system
The online PDR system for academic staff in the Research, Teaching and Enterprise or Specialist and Supporting Academic job families brings together information held in the following systems across the university:
LUPIN is a publications database which enables staff and students to store and manage the metadata for all research outputs. LUPIN integrates with the Institutional Repository to provide a single entry point for the University’s metadata and full-text research outputs. The information on published outputs in the PDR is extracted directly from LUPIN and includes all outputs since 2014. The information on accepted outputs is also extracted from LUPIN.
All current research grants held in an Agresso J code for which you are acknowledged as PI or Co-I will appear in the ‘Research Funding’ section on the online system.
Your personal details and training and development record will be pulled in from iTrent.
In the ‘Research Supervisions’ section, pulled in from LUSI, you will see all MPhil and PhD students for whom you are recorded as a supervisor and who have an active status, together with any who have completed their studies in the last 12 months.
The publication visibility indicators have remained the same for the 2020 PDR, although the date range has increased to cover 2014–2019 data. Please bear in mind that when the data was extracted the 2019 year was not yet complete, and the publication lists and citation indicators will therefore be affected. Once again, we have provided each member of RT&E staff (except those in the former School of Art, English and Drama), and those SSAR/OT staff identified as REF-dependent, with a list of all the outputs allocated to them in the SciVal citation benchmarking tool. SciVal is based on the Scopus database of 23,000 journals, 100,000 conferences and 156,000 books. For each output we will provide the following information to enable you to have a detailed conversation about your publication strategy.
1. The in-year SNIP and SJR value (where available). SNIP and SJR are field-normalised journal (or conference) citation indicators, and can be a helpful guide to a journal's visibility. A journal or conference with a SNIP value of around 1.0 would be in the top 25% by SNIP; a value of around 1.5 would be in the top 10%. A journal or conference with an SJR value of around 0.75 would be in the top 25%; a value of about 1.4 would be in the top 10%. In the past, a static SNIP or SJR value was provided (the previous year's) regardless of the year of publication. This year SciVal is able to provide the journal's SNIP and SJR value for the year in which the output was published. As 2019 SNIP and SJR values are not yet available, 2019 outputs will not have SNIP and SJR values.
2. Citations. The raw citation count is provided to help identify uncited publications and whether there are any common characteristics amongst uncited publications.
3. Field-Weighted Outputs in Top Citation Percentiles, per percentile. This indicates how highly cited the output is relative to its age, subject area, and document type. The number in this column indicates which citation percentile the output falls in, where a lower number indicates a higher citation rate (1 being the most highly cited percentile).
4. Scopus affiliation names and Country. The list of institutions and countries with which you are co-authoring is provided as an indication of your collaboration activities. In many disciplines, internationally co-authored papers receive greater visibility and citations.
Publication visibility indicators such as those supplied are only one view of an output's visibility and success. There are others, such as altmetrics, Institutional Repository views and downloads, and prizes and awards. A comment box is provided to allow you to supply additional information on the visibility of your research outputs. This will be of particular relevance to staff working in the Arts and Humanities, whose outputs are not best served by SciVal data.
It is important to make a note of any incorrect data you notice as part of your preparation. This will need to be corrected in the source data, such as LUPIN, Agresso, iTrent or LUSI.
Queries should be directed to the following in the first instance: