
Open Science at EMBL

For a positive culture change in life science research

Mahnoor Zulfiqar: Tackling the bottleneck in metabolomics with a FAIR computational workflow.

Quick take summary

Metabolomics studies small molecules (metabolites) involved in metabolic pathways.

Workflows are repeatable processes that convert input data into desired outputs, essential for handling large datasets.

Widespread adoption of FAIR workflows could improve reproducibility and therefore public trust in science.

Challenge 1

A major bottleneck in metabolomics is the identification of unknown compounds, which limits understanding of their structure and function.

Solution 1

Mahnoor uses mass spectrometry and computational tools to analyze metabolites from marine organisms, which can lead to drug discovery.


Challenge 2

    • Metabolomics generates massive datasets, up to tens of thousands of features (detected metabolite signals) per study.
    • A reproducible workflow is needed to improve confidence in metabolite annotations (compound identification).

Solution 2

Mahnoor developed a computational workflow to integrate multiple tools for metabolite identification, combining spectral databases and in silico predictions.
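
To make this concrete, here is a minimal, hypothetical Python sketch of how candidate annotations from a spectral-database search and an in silico prediction tool could be merged and re-scored per feature. The function name, data layout, and weights are illustrative assumptions, not the actual interfaces or scoring scheme of the tools in Mahnoor's workflow.

# Hypothetical sketch: merging candidate annotations from two sources.
# Field names and scoring weights are illustrative, not a real tool's API.

def merge_annotations(db_hits, insilico_hits, db_weight=0.6, insilico_weight=0.4):
    """Combine spectral-database hits and in silico predictions per feature.

    Each input maps a feature ID to a list of (candidate, score) pairs with
    scores normalised to 0..1. Database hits are weighted higher because they
    are backed by experimentally measured reference spectra.
    """
    combined = {}
    for feature_id in set(db_hits) | set(insilico_hits):
        scores = {}
        for candidate, score in db_hits.get(feature_id, []):
            scores[candidate] = scores.get(candidate, 0.0) + db_weight * score
        for candidate, score in insilico_hits.get(feature_id, []):
            scores[candidate] = scores.get(candidate, 0.0) + insilico_weight * score
        # Rank candidates for this feature by combined score, best first.
        combined[feature_id] = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return combined

if __name__ == "__main__":
    db = {"F001": [("tyrosine", 0.92), ("phenylalanine", 0.40)]}
    insilico = {"F001": [("tyrosine", 0.75)], "F002": [("caffeine", 0.66)]}
    for feature, candidates in merge_annotations(db, insilico).items():
        print(feature, candidates)

Weighting database hits higher reflects the point made later in the interview: annotations backed by experimentally measured reference spectra carry more confidence than purely predicted ones.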


Challenge 3

    • Mahnoor’s workflow needed to be interoperable across different systems (e.g., Linux, Windows, Mac) and reusable by other researchers.
    • Integrating tools written in different programming languages (e.g., R and Python) was a significant challenge.

Solution 3

    • Mahnoor used Docker containers and Common Workflow Language (CWL) to improve interoperability and reproducibility (a minimal run sketch follows this list).
    • Mahnoor emphasized the importance of community support and collaboration in learning and implementing FAIR principles.
    • Resources like GitHub, Docker, and CWL were instrumental in her process.
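
To illustrate the Docker-plus-CWL combination at the point of use (as referenced in the first bullet above), here is a small Python sketch that calls the cwltool reference runner on a hypothetical workflow description and job file. It assumes cwltool and Docker are installed; the file names are placeholders and the snippet is not taken from Mahnoor's repository.

import subprocess
import sys

def run_cwl_workflow(workflow="workflow.cwl", job="inputs.yml", outdir="results"):
    """Invoke the cwltool reference runner and return its exit code."""
    # cwltool resolves any DockerRequirement declared in the CWL document and
    # pulls the needed container images, so the same call works on Linux,
    # macOS, or Windows hosts that have Docker available.
    cmd = ["cwltool", "--outdir", outdir, workflow, job]
    print("Running:", " ".join(cmd))
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(run_cwl_workflow())

Because the CWL document, not the Python wrapper, declares the container requirements, the same invocation stays portable across operating systems and workflow platforms.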

Helpful resources

Workflow relevant

  • Workflow Community Initiative: a workflows community (including users, developers, researchers, and facilities) that enables scientists and workflow-system developers to discover software products, related efforts, events, technical reports, etc., and to engage in community-wide efforts to tackle grand challenges in workflows.
  • WorkflowHub.eu: a registry to describe, share, and publish workflows developed in any workflow language or workflow management system. WorkflowHub aims to facilitate discovery and reuse of workflows in an accessible and interoperable way. This is achieved through extensive use of open standards and tools, including CWL, RO-Crate, Bioschemas, and GA4GH’s TRS API, in accordance with the FAIR principles.
  • FAIR data standards published in 2016: The FAIR Guiding Principles for scientific data management and stewardship
  • FAIR minimum standards for workflows published in 2020: FAIR for software/workflows
  • New FAIR computational workflow guidelines: Applying the FAIR Principles to computational workflows

FAIR relevant

  • Findable: ontologies such as the EDAM ontology, WorkflowHub, the Bioschemas profile for Computational Workflows, and the use of identifiers, descriptive metadata, and keywords (a minimal metadata sketch follows this list)
  • Accessible: standardised protocols such as HTTPS (for web access) and an API (for programmatic access), plus metadata availability and maintenance
  • Interoperable: standardised and open data formats, a standard description language such as CWL for workflows, and software containers such as Docker for operating-system interoperability
  • Reusable: licensing, domain-relevant metadata, and provenance for reuse, e.g. via Workflow RO-Crate
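
To give a flavour of the descriptive metadata mentioned in the Findable bullet above, here is a small Python sketch that writes a JSON-LD record loosely following the schema.org / Bioschemas ComputationalWorkflow vocabulary. All values are placeholders, and the exact required and recommended properties should be checked against the Bioschemas profile itself.

import json

# Sketch only: property names loosely follow the schema.org / Bioschemas
# ComputationalWorkflow vocabulary; all values below are placeholders.
metadata = {
    "@context": "https://schema.org",
    "@type": "ComputationalWorkflow",
    "name": "Example metabolome annotation workflow",
    "description": "Annotates LC-MS/MS features using spectral databases and in silico tools.",
    "programmingLanguage": ["Python", "R", "CWL"],
    "license": "https://spdx.org/licenses/MIT",
    "creator": {"@type": "Person", "name": "Jane Researcher"},
    "keywords": ["metabolomics", "FAIR", "workflow", "annotation"],
    "url": "https://example.org/my-workflow",
    "version": "1.0.0",
}

# A machine-readable record like this is what registries and search engines
# index, which is what makes the workflow findable (the F in FAIR).
with open("workflow-metadata.json", "w") as handle:
    json.dump(metadata, handle, indent=2)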

Metabolomics relevant

Transcript

00:00:19:17 – 00:00:45:23
Victoria: I’m your host, Victoria, and I’m the Open Science specialist at EMBL. Joining me today is my guest, Dr. Mahnoor Zulfiqar, who is a post-doctoral fellow at EMBL and an expert in the FAIR principles and computational workflows. I’m very excited to talk to you, Mahnoor. And that’s because when we were looking for potential podcast guests, your name was mentioned as a real expert who actually applied open science to their work.

00:00:46:00 – 00:01:10:00
Victoria: I’m really looking forward to learning from your hands-on experience working on a project where you made a computational workflow FAIR. And first, I want to invite you to tell us a little bit about yourself and your research, and actually, how did your interest in your research develop?

Mahnoor: Thank you, Victoria, for inviting me and giving me this opportunity to talk about my work, on Open Science.

00:01:10:02 – 00:01:50:04
Mahnoor:
So, just a little bit about myself: my research interest is specifically in bioinformatics. It started in my bachelor’s, where I was actually studying biochemistry, and I realized the importance of computational analysis in advancing biological research. Then in my master’s I shifted and moved towards bioinformatics as my major, and did some computer-aided drug discovery from natural products.


So this was a bit of my trajectory towards my PhD, where I was again working with natural products, but more from the marine environment, applying metabolomics and cheminformatics to that project. So it was not a very direct path for me towards bioinformatics or cheminformatics, but it developed over time, where I was first introduced to informatics, then to natural products.

00:02:19:21 – 00:02:44:06
Victoria:
And from natural products to metabolomics and cheminformatics. That sounds really cool, and I just want to break that down a little bit more. By natural products, you mean chemicals or compounds that exist in the natural world, right? And coming from marine sources. So how would that work? How do you actually discover drugs from natural compounds using computational analysis?


I’m curious how you put this all together.

Mahnoor: So, there are many analytical tools that measure the compounds in our surroundings, depending on what sample you have. If you have a sample from a marine environment, all the different components that are released from these marine organisms, be it microorganisms or macroorganisms, can then be detected with these analytical tools.

00:03:15:20 – 00:03:39:16
Mahnoor:
So you collect your sample, you run it through an instrument such as a mass spectrometer, which is the analytical tool to measure these natural products in your sample. And then from these tools, we move on to analyze the data that we have collected and measured, through computational analysis.

00:03:39:16 – 00:04:01:03
Victoria:
And this is where the metabolomics workflows also come into play. I understand a little bit better now. So you also mentioned, of course, metabolomics. Can you tell me a little bit more about what that is? I know that it’s one of the newer fields in the omics. Could you tell us a little bit more about this?

00:04:01:05 – 00:04:30:04
Mahnoor:
Yes. So metabolomics, in simple words, is the study of small molecules, that is, small biomolecules such as fats, amino acids, or nucleotides, all the different small molecules that make up our body or, since I also mentioned marine organisms, that are present in marine organisms or any other living organisms.


And these small molecules are termed metabolites. They are involved in different metabolic pathways that are very essential for life. And the study of these small molecules is then termed metabolomics. It’s relatively newer than the other omics technologies, and at least over the past two decades it has seen some advancements.

00:05:01:24 – 00:05:24:22
Victoria:
So I can imagine the volume of data that you’re collecting is massive, right? Can you tell us, more or less, what order of how many compounds we are looking at per sample, just to get an idea?

Mahnoor: It really depends, I would say, from sample to sample. It also depends on whether you are doing a very targeted study or an untargeted study.

00:05:24:24 – 00:05:56:02
Mahnoor:
So let’s say it can range from a thousand features to more than 10,000 features. It really depends on the sample size, so you can extract as many features as are present in the sample. And here by features I mean the metabolites, because individual compounds could each be present as multiple features.


So the same metabolite could be present as many features. But since this gets more technical, I’ll just say that this is the range of metabolite features you can expect in targeted and untargeted studies.

Victoria: Yeah. I want to ask you what kind of sample size we’re talking about. You can tell me maybe per sample, or actually, what is it?

00:06:21:03 – 00:06:47:13
Victoria:
How much data do you get for an entire project?

Mahnoor: So, the sample size is also relevant to how many features we extract from each sample.

Let’s say that one sample extracted from these marine microorganisms could range from 50 to 100 features in one data file. But it depends.

00:06:47:13 – 00:07:24:23
Mahnoor:
If you increase the number of samples that you have collected, the number of features also increases exponentially with that. So in the end, specifically if you’re talking about untargeted metabolomics studies, which aim at the collection or identification of as many metabolites as possible, these studies are usually high throughput, and we receive a multitude of metabolite features to then study and extract the meaningful structures from.

00:07:24:23 – 00:07:52:08
Victoria:
Yeah. So I am very curious, what are some of the biggest questions that are being asked in the field, and how can this translate into our everyday uses?

Mahnoor: So, research in metabolomics is mostly focused on, in the field of medicine for example, diagnostics and targeted therapy based on the metabolome, the metabolic profile of individuals.

00:07:52:08 – 00:08:19:07
Mahnoor:
But there are also research projects specific to toxins that are present in our environment or in agriculture, and it’s also used to identify novel compounds such as, as mentioned before, natural products. So the applications also extend to the discovery of new compounds.

00:08:19:07 – 00:08:39:19
Victoria:
Right. And this kind of moves towards your previous work, computational workflows for metabolome annotation. From what I understand, also from reading your article, this is a computational workflow for identifying those compounds, right? Can you tell us a little bit more about that, and why did you develop this workflow?

00:08:39:21 – 00:09:15:06
Mahnoor:
So one of the biggest limitations or bottlenecks in metabolomics still remains that there is so much unknown, so many compounds that are unknown. We don’t know the structure of these compounds, and since we don’t know the structure, we don’t know the function. And as mentioned before, you receive a high-throughput dataset with a lot of features, and individually assigning a structure to each metabolite feature that you extract from mass spectrometry can be very cumbersome.


So there are commercial tools that are usually used, and there are also open-source tools. With commercial tools we have issues such as reproducibility of the data, and we don’t have formats that are open, so we cannot utilize the results from that software further downstream and make them reproducible from the start.


However, as I mentioned, there are also open-source tools that are very specific for this problem: they take the input files and try to identify which chemical structures are present in our dataset. But again, they’re very specific for this particular task, and there are different ways to do that as well.


And these tools have specified those ways. Some tools, such as some R packages, do a very good job at assigning a chemical structure from a spectrum, that is, the readings we get back from the mass spectrometer. And these tools use databases. So there are databases where you can submit your mass spectrometry data or your metabolomics data.


And then other people who are also doing metabolomics research can assign chemical structures based on the similarity between already submitted data and the data they have in their samples. This gives higher confidence in the chemical assignment: if you have detected, say, tyrosine in your dataset based on spectral similarity, the similarity between a spectrum in your data and a spectrum that is publicly available in databases, that has more confidence compared to the other type of tools, which don’t rely on these databases but instead create an in silico, or predicted, version of a spectrum.
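
For readers who want a concrete picture of the spectral-similarity idea described here, the following is a deliberately simplified Python sketch, not the algorithm used by the actual annotation tools: it bins two MS/MS spectra by rounded m/z and computes a cosine score between them, with made-up peak values.

import math

def cosine_similarity(spectrum_a, spectrum_b, decimals=2):
    """spectrum_a / spectrum_b: lists of (m/z, intensity) peaks; returns 0..1."""
    def binned(spectrum):
        # Sum intensities of peaks that fall into the same rounded m/z bin.
        bins = {}
        for mz, intensity in spectrum:
            key = round(mz, decimals)
            bins[key] = bins.get(key, 0.0) + intensity
        return bins

    a, b = binned(spectrum_a), binned(spectrum_b)
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Made-up query spectrum compared against a made-up database reference.
query = [(136.08, 100.0), (119.05, 45.0), (91.05, 20.0)]
reference = [(136.08, 95.0), (119.05, 50.0), (65.04, 5.0)]
print(f"cosine score: {cosine_similarity(query, reference):.3f}")

A high score between a measured spectrum and a reference spectrum is what gives the higher-confidence, database-backed annotations Mahnoor describes.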

00:11:36:01 – 00:11:59:07
Victoria:
So the spectrum may not exist in those databases, because these spectral databases have 500,000 features, which is way less than the reality, I see. So the more data that people can compare with one another using these open-source tools, the more we can all increase our confidence in the data that we’re finding, and then in the assignment and identification of new compounds, is that correct?

00:11:59:07 – 00:12:28:12
Mahnoor:
Yes, exactly. We can also increase the confidence in our annotations. So, coming back to your question, the main idea for the workflow was to integrate all these different ways to assign a structure, whether it be a spectral database or an in silico tool, because both of them have their advantages and disadvantages, and if we combine them together, then we have a higher chance of a high-confidence structure annotation.


And the workflow also takes the input data directly; there are no extra pre-processing steps that are usually required for these tools. So we wanted to package everything together for the one purpose of structural annotation, and that’s why we have a tool, or a workflow, for this purpose.

Victoria: I’m super impressed. I think this is such a great idea.

00:12:54:22 – 00:13:12:09
Victoria:
It makes no sense to use just one specific tool; you really should use everything that’s available and plug it all together to ask this question. I want to also step back a bit. We were talking about workflows and using this term a lot, but I think workflows are actually a new concept for a lot of people. Can you just tell me a bit more about what workflows are and why they are important?

Mahnoor: Yeah. So, workflows are used heavily in research. If you take a workflow as a very simple concept, then you can just say that you have a research project, you have research goals, and then you take different research steps to reach that goal.


And that’s a workflow already. But more specifically, how we perceive workflows here is more towards the computational side. In that case, a workflow is a series of repeatable activities that we execute to convert our input dataset into a desirable output through different steps. So this is a very basic definition of what a workflow is.
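
As an aside for readers, this definition, a fixed series of repeatable steps that turn an input into an output, can be sketched in a few lines of Python; the step names and data below are invented placeholders rather than real metabolomics tools.

def load_data(path):
    # Placeholder: a real workflow would parse an instrument file (e.g. mzML).
    return {"source": path, "features": [101.1, 202.2, 303.3]}

def preprocess(dataset):
    # Placeholder cleaning step: keep only features above a threshold.
    dataset["features"] = [f for f in dataset["features"] if f > 150]
    return dataset

def annotate(dataset):
    # Placeholder annotation step: tag every retained feature.
    return [{"feature": f, "annotation": "candidate"} for f in dataset["features"]]

def run_workflow(path):
    """Chain the steps so the same input always yields the same output."""
    result = path
    for step in (load_data, preprocess, annotate):
        result = step(result)
    return result

print(run_workflow("sample_01.mzML"))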

00:14:12:22 – 00:14:31:11
Victoria:
Yeah. So if I understand this correctly, you just put in the input once and then it should go through all the important steps and come out consistently, analyzed the same way. So you don’t have to manually move intermediate results across the assembly line to the next step, and so on.

00:14:31:11 – 00:14:54:16
Mahnoor:
Yeah, I think a lot of people use this in different ways.

Victoria: And actually I can imagine a lot of manual work in between.

Mahnoor: Yes, exactly. While it’s being developed, there is a lot of manual work to make it automated.

Victoria: That’s right. So it’s very obvious why, when you’re working with tens of thousands of data points, you really need to have this type of setup.

00:14:54:16 – 00:15:17:05
Victoria:
Yes. And going back to what you mentioned earlier, that you were combining different ways of analyzing the mass spec spectral data, I can imagine that being pretty difficult. Can you tell me a little bit about the challenges of bringing the different pieces together? And here I also want to start asking you about the FAIR principles.

00:15:17:07 – 00:15:40:20
Victoria:
Yeah. Can you tell us a little bit about what FAIR is? I know we’re asking you about some very big concepts here. Could you tell us a little bit more about how FAIR came into the picture, and the history between the FAIR principles and computational analysis? I read your paper, and that’s where I learned that the FAIR principles were developed in 2016, initially for data.

00:15:40:22 – 00:16:03:19
Victoria: So can you tell us a little bit more about the major challenges that you encountered in making the workflow and, even harder, making it FAIR?

Mahnoor: I would start this from the beginning of my PhD: the first step that I took towards the FAIR principles, when I didn’t know the concepts so much, was to create a GitHub repository.

00:16:03:19 – 00:16:30:13
Mahnoor:
Yeah. And I didn’t even have it open at first.

Victoria: That’s a great one. Step one: creation.

Mahnoor: And then throughout my PhD, my university was also part of the excellence cluster Balance of the Microverse, which does very specific research on microorganisms, and they were also very keen on implementing FAIR principles and good scientific practice.

00:16:30:13 – 00:17:03:20
Mahnoor:
So I was going through these different seminars on FAIR and good scientific practice. And while I was developing the workflow, I was keeping it to the bare minimum, I would say. By bare minimum, I mean that I tried to make the data that I’m using open, the code that I’m using available on GitHub, and also associated with a license on how to reuse the code, how to cite it, and so on.


And this was the very basic FAIRification of the workflow that I was trying to follow. Also, the workflow was not just in one language, it was in two different languages.

Victoria: So you had the hard part and the fun part.

Mahnoor: Yeah. Luckily they were both open, so that helped. But to integrate them together and make the whole workflow automated was still one of the challenges.

00:17:30:19 – 00:18:10:12
Mahnoor:
Yes. Our group in Jena, where I did my PhD, is also very pro open science, and while we develop our workflows or software, we are encouraged to follow FAIR practices. So this motivation, together with the idea of trying to resolve the challenge of R and Python being two separate components, is how I started to make it even more FAIR.

00:18:10:13 – 00:18:36:19
Victoria:
Yeah, for further development. So the challenge gave you the motivation to continue and take on more challenges. So how did you resolve this R and Python conflict?

Mahnoor: So, as I mentioned before, one of the minimal requirements that I was fulfilling at the time was also to use a Docker container.

00:18:36:21 – 00:19:04:17
Mahnoor:
Because you can containerize your whole code into a Docker container, together with all the important libraries and requirements, and then you can run it on any of the operating systems. This somehow resolves the issue of interoperability, but it is not the perfect solution.

00:19:04:17 – 00:19:30:20
Mahnoor: It was still two components, two Docker components, so you couldn’t put them together directly. Containerization allows it to work on multiple platforms, but it is still two components. To resolve this, one of the postdocs in our team suggested using the Common Workflow Language, or CWL.

00:19:30:22 – 00:19:56:18
Mahnoor: CWL is a workflow language, or workflow description standard, to describe your workflow in its different components.

Victoria: So there you mention you have these inputs, you have these different steps within your workflow, and then you have these outputs. So having that standardized way of describing things actually allows people to read, reuse, and understand how other workflows work.

00:19:56:18 – 00:20:17:22
Victoria:
Right, exactly. So of course we want to talk about, again, the focus of your work: making your workflow FAIR. What type of value does it have to make a workflow FAIR? Of course, a non-FAIR workflow can probably also analyze the data. So why did you put so much elbow grease and energy into making it FAIR?

00:20:17:22 – 00:20:57:02
Mahnoor:
So FAIRification makes your workflow very reproducible. The FAIR principles are also based on the fact that reproducibility is one of the cornerstones of science in general, because if your research is not reproducible, whoever is building upon your research cannot work efficiently; it doesn’t make sense to build upon something that is not reproducible, because the results change with every run.


So one of the main reasons why workflows should be FAIR, and why I tried to make the workflow FAIR, was reproducibility. Apart from that, it’s also good for the researchers themselves to make the workflow FAIR, because one of the components of the FAIR principles is findability. Sometimes you produce a very nice workflow, but then it’s not findable over the internet.


You Google it, but you don’t find it in the first few pages of the search results; it’s going to be on the tenth page. So the FAIR principles also ensure that it’s findable and that people can easily find it over the internet. So we’ve talked about the F part of FAIR already.


The second part is the A, which is for accessibility, and if your workflow, or any research object basically, is accessible, that means it can be further built upon, because you have access to the workflow, to the code, to the instructions that tell you how to reuse that whole workflow.

00:22:15:06 – 00:22:47:13
Mahnoor:
So you have proper access to that particular software or workflow. And then there is interoperability: it’s important that the workflow is not just able to be executed on your own work laptop, but that it can also be executed on different platforms, on high-performance computing, or on Linux, Mac, Windows, all these different operating systems as well.

00:22:47:15 – 00:23:14:13
Mahnoor:
And also that it can be executed in a different workflow management system, such as Galaxy. So it’s important that the workflow is interoperable across all these different platforms. And then, of course, the reusability aspect comes into play when you have developed the workflow and are no longer developing it further.

00:23:14:13 – 00:23:43:04
Mahnoor:
Or if nobody else is developing it further, at least the metadata is available and you can still reuse it and reproduce the results, not only on the initial dataset that was used to develop the workflow, but also on new data that the workflow has not seen, using the same steps.

00:23:43:06 – 00:24:07:08
Mahnoor:
So in this way, FAIRification of the workflow is important, and all these different components are also there to make sure that it’s moving in the correct direction, more towards reproducibility in science.

Victoria: And it really makes sure that the knowledge lives on, the discovery actually lives on. It doesn’t just become a blip in human discoveries.

00:24:07:08 – 00:24:26:22
Victoria:
And then no one else knows what happened. And I think that’s so important. From everything that you discussed, getting through each aspect of FAIR has its challenging elements, and it took a lot of skill sets and learning, right? So how did you get these skill sets? You mentioned that you studied bioinformatics and biochemistry.

00:24:27:01 – 00:24:55:02
Victoria:
How did you learn all the necessary skills that you need to actually make things FAIR along the way? And what resources were helpful to you?

Mahnoor: I would say that the best part of this smaller project of the PhD, making the workflow FAIR, is that I got to know a lot of people who were very enthusiastic about this topic.


So, for describing my workflow in CWL, I was not an expert, and the postdoc that I was working closely with was also still learning how to write CWL, but then we happened to meet the developer of CWL, who really helped us.

Victoria: So you found the help you needed. Great.

Mahnoor: And apart from that, there are some important skill sets, of course.

00:25:21:19 – 00:25:48:09
Mahnoor:
You know, a little bit of programming, so you can make a very simple workflow, and then going through different websites, for example, guidelines that are specific on how to make your workflow FAIR. However, a small point here is that, as you know, it’s a new concept in science.


In general as well. And there is a lot of information on it, but to find which aspects of the FAIR principles would be suitable for your own project is a bit of a challenge. So it’s good to find these guidelines, but it’s also very nice to reach out to the people who are actually involved.

00:26:13:10 – 00:26:34:00
Victoria:
And that’s what I’m also hoping, that your experience and your story could be helpful for this particular use case of developing a computational workflow. We would love to provide some links and resources on the accompanying page for the podcast.

Mahnoor: I would be very happy to provide those as well.

00:26:34:02 – 00:26:58:11
Mahnoor:
There are working groups who are particularly working on making computational workflows FAIR, and on all the other aspects of workflows too, so that will be a very useful resource.

Victoria: That’s so interesting. That’s what I really like about the open science and FAIR community: people really help each other, and you can become part of the communities that are working on making things better, making things more FAIR.

00:26:58:13 – 00:27:22:12
Victoria:
FAIR is better. Yes, if you haven’t heard that enough today. So, all of the work that you did makes things not only open but, more than that, reusable by others. And so, did someone else reuse your work, rebuild it, and build upon it?

00:27:22:14 – 00:27:45:10
Victoria:
And did you receive feedback? What was that like for you?

Mahnoor: So, as I mentioned, the code is available on GitHub, and there are some other researchers who have also tried to use the workflow. And of course, it’s not a perfect workflow, so there were some issues. But again, since it’s publicly available, the issues were posted.

00:27:45:12 – 00:28:19:14
Mahnoor:
Some of them I could also resolve, and give them feedback on how to run the workflow for their particular case. So it’s good practice to have the code open source, because then you get feedback directly through GitHub issues. So there has been reuse, but not so much further development, because I was the core developer, and now I’m here doing my postdoc, which is a bit far from developing the metabolomics workflow.

00:28:19:16 – 00:28:50:22
Mahnoor:
However, it has been used as an example in the computational workflow community, for metabolomics data specifically. So it’s moving in that direction, at least for now.

Victoria: Yeah, definitely. And I think all of these things are so concrete and so much more meaningful than, for example, a simple citation of a publication. From what I hear from you, this is really so much more knowledge that you contributed that people can build upon.

00:28:51:00 – 00:29:23:17
Victoria:
And also getting feedback directly on your work, I think that’s really precious. So I want to close off with a question about FAIR principles and FAIR workflows in general. From what you describe about what a workflow is, it really sounds like it could be applied in any research project, right? And so if FAIR workflows were more widespread and a more common practice, what type of research could that enable, and what could that help us do in the future?

00:29:23:19 – 00:29:56:08
Mahnoor:
Again, I think that if there are more FAIR workflows, there will be more reproducibility, and specifically in metabolomics that is a bit of a challenge. Making computational workflows FAIR would also increase the confidence of other people in the research itself, or let’s say not just in science in general, but because they have an assessment criterion that this is FAIR.

00:29:56:10 – 00:30:19:20
Mahnoor:
Also, how FAIR it is. So I think if we could also have some public awareness of the FAIR principles, that would also increase the faith in science as well.

Victoria: I think that’s a really big challenge right now. You know, if a researcher says, oh, I saw this once, how should you trust it?

00:30:19:23 – 00:30:52:24
Mahnoor:
You cannot really trust that 100 percent. What if it was an artifact? You know, it’s so important to have replicates and to have multiple groups independently reporting the same findings. And that’s why it’s so powerful to enhance reproducibility in science, for the trust of the general public. I think another point is also to follow the standards that are set not only in the metabolomics community, but also in the workflow community, because right now there can be many ways to describe a workflow.

00:30:52:24 – 00:31:22:05
Mahnoor:
There can be many ways to interpret your metabolomics results. I know it’s very difficult, because research is so broad in general, that you cannot apply specific standards per se. But there is a minimal set of standards for each of these categories that could be followed. And I think there are also some initiatives making it a requirement for publication that your data be available.

00:31:22:05 – 00:31:48:09
Mahnoor:
Absolutely. Or whether you follow those minimum standards or not. So I think these are small initiatives; these things require relatively little effort if it’s a minimal set of standards, but they make a bigger impact. But again, you have to start from the beginning, and you need to know these standards.

00:31:48:09 – 00:32:11:19
Mahnoor:
So I think, in that way, the research would also be important in this case.

Victoria: Yeah, absolutely. Thank you so much. I think this is just an incredible perspective. I want to thank you, Mahnoor Zulfiqar, for sharing your own experience and your insights. To learn more about applying the FAIR principles to computational workflows, please visit the accompanying blog post for this podcast episode.

00:32:11:21 – 00:32:20:02
Thank you for listening to The Knowledge Catalyst. This is your co-host, Victoria Yan. Looking forward to the next chat!

About Mahnoor Zulfiqar

Dr. Mahnoor Zulfiqar is a post-doctoral fellow at EMBL, in the group of Michael Zimmermann. She specializes in FAIR principles and computational workflows.

She has hands-on experience in making computational workflows FAIR (Findable, Accessible, Interoperable, and Reusable).


Interested in speaking on The Knowledge Catalyst?

Please contact Victoria Yan: victoria.yan@embl.de


Listen to this episode on your favourite streaming platforms


Credits

Production: Victoria Yan, Anandhi Iyappan

Audio Technician and Editing: Felix Fischer

Original music: Sergio Alcaide, Felix Fischer

Graphics: Holly Joynes

Web Design: Victoria Yan, Szymon Kasprzyk

Photography: Kinga Lubowiecka
