PKP 2017 Sprint Report: Internal Statistics
During the 2017 PKP Conference Sprint, our group was responsible for proposing enhancements and improvements to the internal workflow statistics of OJS. The group included Juan Pablo Alperin, Alex Mendonça, Israel Cefrin, and Joelle Hatem, with contributions from Alice Meadows and Ina Smith.
Journal editors often have a hard time extracting internal statistics from OJS. Important data is collected during the workflow but is not included in the reports. The system also lacks pre-filtering and date-range selection, so editors have to work with massive raw CSV reports and then process them in spreadsheet software such as Excel. In addition, some statistics may produce faulty data because of the way they were originally conceived.
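To illustrate the kind of post-processing editors currently do by hand in a spreadsheet, here is a minimal sketch of filtering a raw CSV report by date range. The column names (`submission_id`, `date_submitted`, `decision`) are hypothetical, not the actual OJS export headers:

```python
import csv
import io
from datetime import date

# Hypothetical OJS-style submissions export; the real report has
# many more columns and different header names.
RAW_REPORT = """submission_id,date_submitted,decision
101,2017-03-14,Accept
102,2017-06-02,Decline
103,2017-08-21,Accept
"""

def filter_by_date_range(csv_text, start, end, date_field="date_submitted"):
    """Keep only rows whose date_field falls inside [start, end]."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [
        row for row in rows
        if start <= date.fromisoformat(row[date_field]) <= end
    ]

# Example: only submissions from the first half of 2017.
first_half = filter_by_date_range(RAW_REPORT, date(2017, 1, 1), date(2017, 6, 30))
```

Building this kind of filtering into the report generation itself, rather than leaving it to Excel, is exactly the gap the group identified.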
Internal statistics are crucial for improving editorial workflow management and for reporting on specific needs: they assist internal evaluations, support reporting to funding agencies, and let a journal inform its readership about its editorial performance. They are like a journal’s “health check”.
The group’s goals were to identify desirable statistics for journals to report, improve the journal-level statistics reports, provide more options, and identify statistics that are not being produced correctly.
We also identified the need for a more visual display of information. Instead of just tables, a few charts with interactive comparisons over a time range, along with some filtering options, could help editors get a broader view of their journal’s workflow — something like a dashboard. There are a few open libraries that could be used for that purpose, such as Highcharts, or a concept similar to Dashbuilder.
Last but not least, the group set out to identify what was causing the faulty data in some of the statistics, specifically those related to editorial decisions on manuscripts.
Two reports were suggested:
- Submissions Report, which would provide detailed statistics about manuscripts and their final decisions.
- Reviewer Report, which would provide detailed statistics on reviewer activities.
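As a rough sketch of what the proposed Submissions Report could summarize, the snippet below counts manuscripts by final editorial decision. The record shape and decision labels are illustrative assumptions; the real field names would come from the requirements table the group drafted:

```python
from collections import Counter

# Hypothetical submission records; actual fields and decision labels
# would follow the group's requirements table, not this sketch.
submissions = [
    {"id": 101, "decision": "Accept"},
    {"id": 102, "decision": "Decline"},
    {"id": 103, "decision": "Accept"},
    {"id": 104, "decision": "Revisions Required"},
]

def decision_summary(records):
    """Count manuscripts by final editorial decision."""
    return Counter(r["decision"] for r in records)

summary = decision_summary(submissions)
```

A Reviewer Report could aggregate in the same way, grouping by reviewer instead of by decision.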
The group came up with a table listing the minimum required fields, filters, and date ranges for each report, the priority of each field on a scale from 1 to 3, and whether the field was already available in the existing OJS reports.
Given the expertise on the team and the time available, we were not able to code our suggested changes during the Sprint. Our group focused on documenting the requirements that would serve as the foundation for the coding phase. After that, testing will be done to ensure the reports return the correct, expected data.
We believe that these statistics will benefit a wide range of journals using OJS by giving editors insight into what is happening in their journals.