Dominika Tkaczyk

Director of Technology

Biography

Dominika joined Crossref in 2018 as a Principal R&D Developer, where she focused on metadata matching research aimed at enriching the scholarly record through the discovery of new relationships. In 2024, she became Crossref’s Director of Data Science and established the Data Science team, with a mission to explore innovative ways of using data to support the scholarly community, enrich the Research Nexus with more metadata and relationships, and develop collaborations with like-minded community initiatives. Since 2025, Dominika has served as Director of Technology, leading a unified technology team that integrates infrastructure, software development, and data science functions. Dominika holds a PhD in Computer Science from the Polish Academy of Sciences. Prior to joining Crossref, she was a researcher and a data scientist at the University of Warsaw, Poland, and a postdoctoral researcher at Trinity College Dublin, Ireland.

ORCID iD

0000-0001-5055-7876

Dominika Tkaczyk's Latest Blog Posts

Piecing together the Research Nexus: uncovering relationships with open funding metadata

Rocío Gaudioso Pedraza, Wednesday, Oct 1, 2025

In Community, Funding, Grant Linking System, Metadata Matching

The Crossref Grant Linking System (GLS) has been facilitating the registration, sharing, and re-use of open funding metadata for six years now, and we have reached some important milestones recently! What started as an interest in identifying funders through the Open Funder Registry evolved into a more nuanced and comprehensive way to share and re-use open funding data systematically. That’s how, in collaboration with the funding community, the Crossref Grant Linking System was developed. Open funding metadata is fundamental to the transparency and integrity of the research endeavour, so we are happy to see it included in the Research Nexus.

Data Science @Crossref

Dominika Tkaczyk, Monday, Jul 7, 2025

In Data Science

To address the growing scale and complexity of scholarly data, we’ve launched a new data science function at Crossref. In April, we were excited to welcome our first data scientists, Jason Portenoy and Alex Bédard-Vallée, to the team. With their arrival, the Data Science team is now fully up and running. In this blog post, we’re sharing our vision and what’s ahead for data science at Crossref.

Meet six winners of the first ever Crossref Metadata Awards

Kornelia Korzec, Wednesday, May 7, 2025

In Metadata, Community

Marking our 25th anniversary, we are launching the Crossref Metadata Awards to emphasise our community’s role in stewarding and enriching the scholarly record.

We are pleased to recognise Noyam Publishers, GigaScience Press, eLife, American Society for Microbiology, and Universidad La Salle Arequipa Perú with the Crossref Metadata Excellence Awards, and Instituto Geologico y Minero de España wins the Crossref Metadata Enrichment Award. These inaugural awards highlight the leadership of members who show dedication to the best metadata practices.

Metadata matching: beyond correctness

Dominika Tkaczyk, Wednesday, Jan 8, 2025

In Metadata, Linking, Metadata Matching, Data Science

https://doi.org/10.13003/axeer1ee

In our previous entry, we explained that thorough evaluation is key to understanding a matching strategy’s performance. While evaluation is what allows us to assess the correctness of matching, choosing the best matching strategy is, unfortunately, not as simple as selecting the one that yields the best matches. Instead, these decisions usually depend on weighing multiple factors based on your particular circumstances. This is true not only for metadata matching, but for many technical choices that require navigating trade-offs. In this blog post, the last one in the metadata matching series, we outline a subjective set of criteria we would recommend you consider when making decisions about matching.

How good is your matching?

Dominika Tkaczyk, Wednesday, Nov 6, 2024

In Metadata, Linking, Metadata Matching, Data Science

https://doi.org/10.13003/ief7aibi

In our previous blog post in this series, we explained why no metadata matching strategy can return perfect results. Thankfully, however, this does not mean that it’s impossible to know anything about the quality of matching. Indeed, we can (and should!) measure how close (or far) we are from achieving perfection with our matching. Read on to learn how this can be done!

How about we start with a quiz? Imagine a database of scholarly metadata that needs to be enriched with identifiers, such as ORCIDs or ROR IDs. Hopefully, by this point in our series, this is recognizable as a classic matching problem. In searching for a solution, you identify an externally developed matching tool that makes one of the claims below. Which of the following would demonstrate satisfactory performance?

Read all of Dominika Tkaczyk's posts »