{"id":15,"date":"2019-09-18T08:56:19","date_gmt":"2019-09-18T08:56:19","guid":{"rendered":"https:\/\/www.embl.org\/groups\/kreshuk\/?page_id=15"},"modified":"2025-01-24T14:51:01","modified_gmt":"2025-01-24T14:51:01","slug":"home","status":"publish","type":"page","link":"https:\/\/www.embl.org\/groups\/kreshuk\/","title":{"rendered":"Home"},"content":{"rendered":"<div class=\"vf-grid vf-grid__col-3 | vf-u-margin__bottom--800\">\n      <div class=\"vf-grid__col--span-2\">\n      <div class=\"vf-content-hub-html\">\n  <!-- Generated by: http:\/\/content.embl.org\/api\/v1\/pattern.html?filter-content-type=profiles&amp;filter-uuid=f97d9bce-0a98-4250-ab74-de726b969114&amp;pattern=node-teaser -->\n      <div data-embl-js-conditional-edit=\"9302\">\n              <h1 class=\"vf-lede\">The Kreshuk group develops machine learning-based methods and tools for automatic segmentation, classification and analysis of biological images.<\/p>\r\n\n            <a class=\"vf-text vf-text--body-r vf-link embl-conditional-edit\" rel=\"noopener noreferrer nofollow\" href=\"\/node\/9302\" target=\"_blank\">Edit<\/a>\n    <\/div>\n  <\/div>\n    <\/div>\n      <div >\n\n<!-- <style>\n  .vf-content-hub-html {\n    --vf-stack-margin--custom: unset !important;\n  }\n<\/style> -->\n\n    <div class=\"vf-content-hub-html vf-stack vf-stack--600\" data-cache=\"3d5ed75b\">\n      <!-- Generated by: http:\/\/content.embl.org\/api\/v1\/pattern.html?filter-content-type=person&amp;filter-field-value%5Bfield_person_positions.entity.field_position_membership%5D=leader&amp;filter-field-value%5Bfield_person_positions.entity.field_position_team.entity.field_foreignid%5D=509&amp;filter-ref-entity%5Bfield_person_positions%5D%5Btitle%5D=&amp;filter-ref-entity%5Bfield_person_positions%5D%5Bfield_position_primary%5D=1&amp;hide%5Bteam%2Cmobile%2Cphones%5D=1&amp;limit=5&amp;pattern=vf-profile-inline&amp;sort-field-value%5Bchanged%5D=DESC -->\n                \n                            <article 
class=\"vf-profile vf-profile--very-easy vf-profile--medium vf-profile--inline\" data-embl-js-conditional-edit=\"88034\">\n              <img decoding=\"async\" class=\"vf-profile__image\" src=\"https:\/\/content.embl.org\/\/sites\/default\/files\/styles\/medium\/public\/persons\/CP-60028565.jpg?itok=XbjbuqK2\" alt=\"image of Anna Kreshuk\" \/>\n      \n              <h3 class=\"vf-profile__title\">\n                      <a href=\"https:\/\/www.embl.org\/people\/person\/anna-kreshuk\" class=\"vf-profile__link\">Anna Kreshuk<\/a>\n                  <\/h3>\n      \n              <p class=\"vf-profile__job-title\">\n          Interim Head of Unit for CBB\n        <\/p>\n      \n      \n      \n              \n                  <p class=\"vf-profile__email | vf-u-last-item\">\n            anna.kreshuk [at] embl.de\n          <\/p>\n              \n      \n      \n              <p class=\"vf-profile__uuid\">\n          <span>ORCID:<\/span>\n          <a class=\"vf-profile__link vf-profile__link--secondary\" href=\"https:\/\/europepmc.org\/authors\/0000-0003-1334-6388\">\n            0000-0003-1334-6388\n          <\/a>\n        <\/p>\n            <a class=\"vf-text vf-text--body-r vf-link embl-conditional-edit\" rel=\"noopener noreferrer nofollow\" href=\"\/node\/88034\/88034\" target=\"_blank\">\n        Edit\n      <\/a>\n    <\/article>\n  <\/div>\n\n  <\/div>\n<\/div>\n\n\n\n<div class=\"vf-grid | vf-grid__col-3\"><div class=\"vf-grid__col--span-2\"><!--[vf\/content]-->\n<div class=\"vf-content\">\n\n<h3 class=\"wp-block-heading\">Previous and current research<\/h3>\n\n\n\n<p>Machine learning is advancing the state of the art in image analysis more rapidly than ever before: for many problems in natural image analysis, automated methods are now approaching parity with humans. 
One of the major advantages of learning-based approaches is their general applicability: tailoring to a particular problem is performed by providing suitable training data, while the core of the algorithm remains unchanged. Our aim is to build on the latest advances in machine learning and computer vision to develop new methods for the analysis of microscopy images. To bring these methods to members of the life science community\u00a0without computer vision expertise, we have developed a toolkit for interactive learning and segmentation (ilastik).<\/p>\n\n\n\n<p>While the algorithms in ilastik generalise to provide user-friendly solutions to a wide array of image analysis problems, the most challenging bioimage datasets require a tailored approach. Our group is particularly interested in solving challenging segmentation problems for light or electron microscopy (LM or EM),&nbsp;in 3D and at large scale. Most recently, we have developed methods and tools to segment all cells and nuclei in a juvenile worm of the species&nbsp;<em>Platynereis dumerilii<\/em>&nbsp;(EM, Vergara&nbsp;<em>et al.<\/em>, <em>Cell<\/em> 2021), as well as in various plant organs and tissues (LM, Wolny, Cerrone&nbsp;<em>et al.<\/em>,&nbsp;<em>eLife<\/em>&nbsp;2020).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Future projects and goals<\/h3>\n\n\n\n<p>All machine learning algorithms require user guidance at the training stage, but deep learning \u2013 the driver of the current computer vision revolution \u2013 is even more annotation-hungry. This problem is especially acute in biological imaging, where annotation of ground-truth data cannot easily be outsourced to non-experts, and changes in experimental conditions can require retraining. Besides the annotation burden, the training process itself depends upon non-trivial expertise in the choice and tuning of hyperparameters. 
Our group is currently working on methods and training strategies that reduce the amount of training data required; some of our early ideas can be seen in <a href=\"https:\/\/arxiv.org\/abs\/2103.14572\">https:\/\/arxiv.org\/abs\/2103.14572<\/a>, <a href=\"https:\/\/arxiv.org\/abs\/2107.02600\">https:\/\/arxiv.org\/abs\/2107.02600<\/a> and <a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2021.11.09.467925v1\">https:\/\/www.biorxiv.org\/content\/10.1101\/2021.11.09.467925v1<\/a>. We are also interested in creative combinations of deep learning and microscopy (Wagner, Beuttenmueller&nbsp;<em>et al.<\/em>, bioRxiv 2020) and in learning-based analysis of morphology.<\/p>\n\n<\/div>\n<\/div>\n\n\n<div><!--[vf\/content]-->\n<div class=\"vf-content\">\n\n<figure class=\"vf-figure wp-block-image size-large\"><a href=\"https:\/\/www.embl.org\/groups\/kreshuk\/wp-content\/uploads\/2020\/12\/kreshuk-group-cbb-figure1.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"709\" class=\"vf-figure__image wp-image-258\" src=\"https:\/\/www.embl.org\/groups\/kreshuk\/wp-content\/uploads\/2020\/12\/kreshuk-group-cbb-figure1-1024x709.jpg\" alt=\"Figure 1: Microscopy provides imagery to the algorithm, which then delineates the cellular structures, producing a clear segmentation.\" srcset=\"https:\/\/www.embl.org\/groups\/kreshuk\/wp-content\/uploads\/2020\/12\/kreshuk-group-cbb-figure1-1024x709.jpg 1024w, https:\/\/www.embl.org\/groups\/kreshuk\/wp-content\/uploads\/2020\/12\/kreshuk-group-cbb-figure1-300x208.jpg 300w, https:\/\/www.embl.org\/groups\/kreshuk\/wp-content\/uploads\/2020\/12\/kreshuk-group-cbb-figure1-768x532.jpg 768w, https:\/\/www.embl.org\/groups\/kreshuk\/wp-content\/uploads\/2020\/12\/kreshuk-group-cbb-figure1.jpg 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption class=\"vf-figure__caption\">Figure 1: Segmentation of cells in a plant ovule imaged with a confocal microscope. 
In collaboration with K. Schneitz (TUM); see also <a href=\"https:\/\/elifesciences.org\/articles\/57613\">https:\/\/elifesciences.org\/articles\/57613<\/a><\/figcaption><\/figure>\n\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"footnotes":""},"embl_taxonomy":[],"class_list":["post-15","page","type-page","status-publish","hentry"],"acf":[],"embl_taxonomy_terms":[],"_links":{"self":[{"href":"https:\/\/www.embl.org\/groups\/kreshuk\/wp-json\/wp\/v2\/pages\/15","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.embl.org\/groups\/kreshuk\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.embl.org\/groups\/kreshuk\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.embl.org\/groups\/kreshuk\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.embl.org\/groups\/kreshuk\/wp-json\/wp\/v2\/comments?post=15"}],"version-history":[{"count":14,"href":"https:\/\/www.embl.org\/groups\/kreshuk\/wp-json\/wp\/v2\/pages\/15\/revisions"}],"predecessor-version":[{"id":16539,"href":"https:\/\/www.embl.org\/groups\/kreshuk\/wp-json\/wp\/v2\/pages\/15\/revisions\/16539"}],"wp:attachment":[{"href":"https:\/\/www.embl.org\/groups\/kreshuk\/wp-json\/wp\/v2\/media?parent=15"}],"wp:term":[{"taxonomy":"embl_taxonomy","embeddable":true,"href":"https:\/\/www.embl.org\/groups\/kreshuk\/wp-json\/wp\/v2\/embl_taxonomy?post=15"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}