We were delighted to engage with over 200 community members in our latest community update calls. We aimed to present a diverse selection of highlights on our progress and to discuss your questions about participating in the Research Nexus. For those who didn’t get a chance to join us, I’ll briefly summarise the content of the sessions here, and I invite you to join the conversations on the Community Forum.
You can take a look at the slides here, and the recordings of the calls are available here.
We have some exciting news for fans of big batches of metadata: this year’s public data file is now available. As in years past, we’ve wrapped up all of our metadata records into a single download for those who want to get started using all Crossref metadata records.
We’ve once again made this year’s public data file available via Academic Torrents, and in response to feedback from public data file users, we’ve taken a few additional steps to make accessing this 185 GB file a little easier.
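For readers who want to script against the download once it’s unpacked, here is a minimal sketch of streaming records out of the file. It assumes the archive unpacks into a directory of gzipped JSON batch files, each containing an `items` array of metadata records (the directory name and sample record below are made up for illustration; check the README bundled with the download for the actual layout).

```python
import gzip
import json
from pathlib import Path

def iter_records(data_dir):
    """Yield every metadata record from gzipped JSON batch files,
    assuming each batch is an object with an "items" array."""
    for path in sorted(Path(data_dir).glob("*.json.gz")):
        with gzip.open(path, "rt", encoding="utf-8") as fh:
            batch = json.load(fh)
        for record in batch.get("items", []):
            yield record

# Tiny demonstration with a made-up batch file:
demo_dir = Path("demo_data_file")
demo_dir.mkdir(exist_ok=True)
sample = {"items": [{"DOI": "10.5555/example", "type": "journal-article"}]}
with gzip.open(demo_dir / "0.json.gz", "wt", encoding="utf-8") as fh:
    json.dump(sample, fh)

dois = [r["DOI"] for r in iter_records(demo_dir)]
print(dois)  # ['10.5555/example']
```

Iterating lazily like this keeps memory use bounded by one batch at a time, which matters at this scale.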
In 2022, we flagged up some changes to Similarity Check that were taking place in v2 of Turnitin’s iThenticate tool, used by members participating in the service. We noted that further enhancements were planned, and we want to highlight some changes that are coming very soon. These changes will affect functionality used by account administrators, and do not affect the Similarity Reports themselves.
From Wednesday 3 May 2023, administrators of iThenticate v2 accounts will notice some changes to the interface and improvements to the Users, Groups, Integrations, Statistics and Paper Lookup sections.
We’ve been spending some time speaking to the community about our role in research integrity, and particularly the integrity of the scholarly record. In this blog, we’ll be sharing what we’ve discovered, and what we’ve been up to in this area.
We’ve discussed in our previous posts in the “Integrity of the Scholarly Record (ISR)” series that the infrastructure Crossref builds and operates (together with our partners and integrators) captures and preserves the scholarly record, making it openly available for humans and machines through metadata and relationships about all research activity.
Crossref is proposing a process to support the registration of content—including DOIs and other metadata—before that content is made available, or published, online. We’ve drafted a paper providing background on the reasons we want to support this and highlighting the use cases. One of the main needs is in journal publishing: supporting registration of Accepted Manuscripts immediately on, or shortly after, acceptance, and handling press embargoes.
Proposal doc for community comment
We request community comment on the proposed approach as outlined in this report.
Some examples of what we’d like to know:
Are you aware of the issues outlined in this proposal?
Are you aware of the funder and institutional requirements for authors to take action on acceptance of manuscripts for publication in journals?
Do you think the proposed solution and workflows are reasonable?
Are you likely to update your workflow to register content early?
If you are likely to update your workflow, how long do you estimate it will take?
Any other general comments, questions or feedback on anything raised in this document.
Please send comments, feedback and questions to me, Ginny, at feedback@crossref.org. The deadline for comments is February 4th. Thanks!