Background
The Principles of Open Scholarly Infrastructure (POSI) provide a set of guidelines for operating open infrastructure in service to the scholarly community. They set out 16 points to ensure that the infrastructure on which the scholarly and research communities rely is openly governed, sustainable, and replicable. Each POSI adopter regularly reviews progress, conducts periodic audits, and self-reports how they're working towards each of the principles.
In 2020, Crossref’s board voted to adopt the Principles of Open Scholarly Infrastructure, and we completed our first self-audit.
In June 2022, we wrote a blog post, "Rethinking staff travel, meetings, and events", outlining our new approach with the goal of not going back to 'normal' after the pandemic. We took into account three key areas:
The environment and climate change
Inclusion
Work/life balance
We are aware that many of our members are also interested in minimizing their impacts on the environment, and we are overdue for an update on meeting our own commitments, so here goes our summary for the year 2023!
Metadata is one of the most important tools we have for communicating with each other about science and scholarship. It tells the story of research that travels through systems and subjects and even to future generations. We have metadata for organising and describing content, metadata for provenance and ownership information, and, increasingly, metadata used as a signal of trust.
Following our panel discussion on the same subject at the ALPSP University Press Redux conference in May 2024, in this post we explore the idea that metadata, once considered important mostly for discoverability, is now a vital element used for evidence and the integrity of the scholarly record.
For the third year in a row, Crossref hosted a roundtable on research integrity prior to the Frankfurt Book Fair. This year the event looked at Crossmark, our tool for displaying retractions and other post-publication updates to readers.
Since the start of 2024, we have been carrying out a consultation on Crossmark, gathering feedback and input from a range of members. The roundtable discussion was a chance to check and refine some of the conclusions we’ve come to, and gather more suggestions on the way forward.
When someone links their data online, or mentions research on a social media site, we capture that event and make it available for anyone to use in their own way. We provide the unprocessed data—you decide how to use it.
Before the expansion of the Internet, most discussion about scholarly content stayed within scholarly content, with articles citing each other. With the growth of online platforms for discussion, publication and social media, we have seen discussions extend into new, non-traditional venues.
Crossref Event Data captures this activity and acts as a hub for the storage and distribution of this data. An event may be a citation in a dataset or patent, a mention in a news article, Wikipedia page or on a blog, or discussion and comment on social media.
How Event Data works
Event Data monitors a range of sources, chosen for their importance in scholarly discussion. We make events available via an API for users to access and interpret. Our aim is to provide context to published works and connect diverse parts of the dialogue around research. Learn more about the sources from which we capture events.
The Event Data API provides raw data about events alongside context: how and where each event was collected. Users can process this data to suit their requirements.
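As a rough illustration, the sketch below queries the API for events about a single registered work. It assumes the public endpoint at https://api.eventdata.crossref.org/v1/events and the obj-id, rows, and mailto query parameters described in the Event Data user guide; the DOI shown is only an example, and the exact parameters should be checked against the current documentation.

    # A minimal sketch of querying the Event Data API for events about one DOI.
    # The endpoint and parameter names are assumptions based on the Event Data
    # user guide; verify them against the current documentation.
    import requests

    EVENT_DATA_API = "https://api.eventdata.crossref.org/v1/events"

    params = {
        "obj-id": "10.5555/12345678",   # the registered work of interest (example DOI)
        "rows": 100,                    # page size
        "mailto": "you@example.org",    # identify yourself to the API
    }

    response = requests.get(EVENT_DATA_API, params=params, timeout=30)
    response.raise_for_status()
    message = response.json()["message"]

    for event in message["events"]:
        # Each event links a subject (where the mention happened) to an object (the work).
        print(event["source_id"], event["relation_type_id"], event["subj_id"], "->", event["obj_id"])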
What is Event Data for?
Event Data can be used for a number of different purposes:
Authors can find out where their work has been reused and commented on.
Readers can access more context around published research, including links to supporting documents and commentary that aren’t in a journal article.
Publishers and funders can assess the impact of published research beyond citations.
Service providers can enrich, analyze, interpret, and report via their own tools.
Data intelligence and analysis organisations can access a broad range of sources with commentary relevant to research articles.
Anyone can contribute to Event Data by mentioning the DOI or URL of a Crossref-registered work in one of the monitored sources. We also welcome third parties who wish to send events or contribute to code that covers new sources. Learn more about contributing to or using Crossref Event Data.
Agreement and fees for Event Data
Event Data is a public API, giving access to raw data, and there are no fees. In the future we will introduce a service-based offering with additional features and benefits. Learn more about the Event Data terms.
What is an event?
In the broadest sense, an event is any time someone refers to a research article with a registered DOI anywhere online. Ideally we would capture all events, but there are limitations:
We can’t monitor the entire Internet, and instead check sites that are most likely to discuss academic content. There are still venues that could be relevant and that we do not cover yet.
Users online refer to academic content in different ways, sometimes using the DOI but more often using the URL or just the article name. We try to decode mentions of DOIs or of a publisher website to match them to an article, but it isn't always possible (a simplified sketch of the idea follows this list). This means we may miss mentions of an article even from sources we are tracking.
At present we are not able to track events where no link is included and only the title or other part of the metadata is mentioned.
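As a simplified illustration of the matching problem (this is not the production matching code, and the helper below is purely hypothetical), a mention can only be reduced to a DOI when something DOI-like actually appears in the link or text:

    # Illustrative only: a much-simplified version of the kind of matching involved.
    # The real agents and percolator handle many more cases (publisher landing pages,
    # shortened links, and so on) than this sketch does.
    import re
    from typing import Optional

    DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+", re.IGNORECASE)

    def extract_doi(text_or_url: str) -> Optional[str]:
        """Return the first DOI-like string found, or None if nothing matches."""
        match = DOI_PATTERN.search(text_or_url)
        return match.group(0).rstrip(".,;)") if match else None

    print(extract_doi("https://doi.org/10.5555/12345678"))  # -> 10.5555/12345678
    print(extract_doi("Read my latest paper on widgets!"))   # -> None (no link, so the mention is missed)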
For Crossref Event Data, an event consists of three parts:
A subject: where was the research mentioned? (such as Wikipedia)
An object: which research was mentioned? (a Crossref or DataCite DOI)
A relationship: how was the research mentioned? (such as cites or discusses)
We determine the relationship from the source of the event; it indicates, in broad categories, how the subject and object are linked.
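As a rough sketch, a single event for a Wikipedia reference might look like the record below. The field names (subj_id, obj_id, relation_type_id, source_id, occurred_at) follow the Event Data user guide, but the exact shape is an assumption, and real records carry additional fields such as an identifier, timestamps, licence terms, and a link to the evidence record.

    # A simplified sketch of one event; real records contain more fields.
    example_event = {
        "subj_id": "https://en.wikipedia.org/wiki/Example_article",  # subject: where the research was mentioned
        "obj_id": "https://doi.org/10.5555/12345678",                # object: the registered work (example DOI)
        "relation_type_id": "references",                            # relationship: how it was mentioned
        "source_id": "wikipedia",                                    # the agent/data source that collected the event
        "occurred_at": "2024-05-01T12:00:00Z",                       # when the event happened
    }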
Events are collected from various data sources by software called agents. Most agents are written and operated by Crossref, with some code written by our partners. Possible events are passed to the percolator software, which tries to match each event with an object DOI. This process is fully automated.
We perform periodic automated checks on the integrity of the data and update event types. Deduplication is also performed by the percolator.
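To illustrate the idea behind deduplication only (this is not the percolator's actual logic), events that repeat the same subject, object, and relationship can be collapsed into a single record:

    # Illustrative only: collapse events that repeat the same subject/object/relationship.
    def deduplicate(events):
        seen = set()
        unique = []
        for event in events:
            key = (event["subj_id"], event["obj_id"], event["relation_type_id"])
            if key not in seen:
                seen.add(key)
                unique.append(event)
        return unique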
To provide transparency, we keep an evidence record about how we matched the object to the subject. Learn more about transparency in Event Data, including links to the open source code and data.
The following agents currently collect data, listed as agent/data source followed by the type of event collected:
Crossref metadata: relationships and references to datasets and DOI registration agencies other than Crossref (e.g. DataCite)
DataCite metadata: links to Crossref-registered content
Faculty Opinions: recommendations of research publications
Hypothes.is: annotations in Hypothes.is
Newsfeed: discussed in blogs and media
Reddit: discussed on Reddit
Reddit Links: discussed on sites linked to in subreddits
Stack Exchange Network: discussed on Stack Exchange sites
Wikipedia: references on Wikipedia pages
Wordpress.com: discussed on Wordpress.com sites
Patent Event Data was historically collected from The Lens. Events from Twitter were collected until February 2023; note that all Twitter events have been removed from search results in accordance with our contract with Twitter. See the Community Forum for more information.
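Events from any of these agents can also be requested by source. As a brief sketch (assuming the same endpoint as above, and that "wikipedia" is the source identifier used for the Wikipedia agent, as listed in the Event Data user guide):

    # A minimal sketch of filtering events by agent/data source and by date.
    # The source identifier and parameter names are assumptions based on the
    # Event Data user guide.
    import requests

    params = {
        "source": "wikipedia",               # restrict results to one agent/data source
        "from-occurred-date": "2024-01-01",  # only events that occurred on or after this date
        "rows": 50,
        "mailto": "you@example.org",
    }

    response = requests.get("https://api.eventdata.crossref.org/v1/events", params=params, timeout=30)
    response.raise_for_status()
    print(response.json()["message"]["total-results"], "matching events")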
What Event Data is not
By providing Event Data, Crossref provides an open, transparent information source for the scholarly community and beyond. It is important to understand, however, that it may not be suitable for all potential users. Here are some of the limitations:
It is not a service that provides metrics, collated reports, or data analysis.
Crossref does not build applications or website plugins for Event Data, for example for displaying results on publisher websites. We do, however, welcome third parties who wish to develop such platforms.
Event Data collection is fully automated and may therefore contain errors or be incomplete; we cannot provide any guarantees in this regard, and users must assess whether the quality of the data meets the needs of their particular use case. There may also be delays between an event occurring and it appearing in Event Data.
Events might be missed due to the limitations of the collection algorithms we use. There is also a small possibility that we link an event to the wrong object.
Event Data does not cover every source of academic discussion. In some cases this is because there is no public access to the data; in others it is because we have not had the capacity to build an agent.
While we hope the data is useful for many purposes, we encourage users to be responsible and exercise caution when making use of Event Data.