Motivation: Permutation-based significance thresholds have been shown to be a robust alternative to classical Bonferroni significance thresholds in genome-wide association studies for skewed phenotype distributions. The recently published method permGWAS introduced a batch-wise approach to efficiently compute permutation-based genome-wide association studies. However, running multiple univariate tests in parallel leads to many repetitive computations and increased use of computational resources. More importantly, traditional permutation methods that permute only the phenotype break the underlying population structure.
Results: We propose permGWAS2, an improved method that does not break the population structure during permutations and uses an elegant block matrix decomposition to optimize computations, thereby reducing redundancies. We show on synthetic data that this improved approach yields a lower false discovery rate for skewed phenotype distributions compared to the previous version and the commonly used Bonferroni correction. In addition, we re-analyze a dataset covering phenotypic variation in 86 traits in a population of 615 wild sunflowers (Helianthus annuus L.). This led to the identification of dozens of novel associations with putatively adaptive traits and removed several likely false-positive associations with limited biological support.
Availability: permGWAS2 is open-source and publicly available on GitHub: https://github.com/grimmlab/permGWAS.
Supplementary information: Supplementary data are available at Bioinformatics Advances online.
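The classical maxT permutation scheme that permGWAS2 improves on (naive phenotype permutation, which breaks population structure) can be sketched as follows. The toy genotype matrix, the simple correlation test statistic, and all parameter values below are illustrative assumptions, not the actual permGWAS2 implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n individuals, m SNPs, one skewed phenotype (all hypothetical).
n, m = 200, 500
X = rng.integers(0, 3, size=(n, m)).astype(float)  # genotype dosages 0/1/2
y = rng.gamma(shape=2.0, scale=1.0, size=n)        # skewed phenotype

def max_abs_corr(X, y):
    """Maximum absolute marker-phenotype correlation over all SNPs."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.abs(r).max()

# maxT scheme: permute the phenotype, record the maximum test statistic
# per permutation, and take the (1 - alpha) quantile as the threshold.
n_perm, alpha = 200, 0.05
null_max = np.array([max_abs_corr(X, rng.permutation(y)) for _ in range(n_perm)])
threshold = np.quantile(null_max, 1 - alpha)
```

permGWAS2 replaces this naive phenotype shuffling with a population-aware permutation strategy inside a linear mixed model; the sketch only conveys how a permutation-based threshold is derived from the null distribution of the maximum statistic.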
Judith Bernett, David B Blumenthal, Dominik Grimm, Florian Haselbeck, Roman Joeres, Olga V Kalinina, Markus List
Machine learning methods for extracting patterns from high-dimensional data are very important in the biological sciences. However, in certain cases, real-world applications cannot confirm the reported prediction performance. One of the main reasons for this is data leakage, which can be seen as the illicit sharing of information between the training data and the test data, resulting in performance estimates that are far better than the performance observed in the intended application scenario. Data leakage can be difficult to detect in biological datasets due to their complex dependencies. With this in mind, we present seven questions that should be asked to prevent data leakage when constructing machine learning models in biological domains. We illustrate the usefulness of our questions by applying them to nontrivial examples. Our goal is to raise awareness of potential data leakage problems and to promote robust and reproducible machine learning-based research in biology.
Linear mixed models (LMMs) are a commonly used method for genome-wide association studies (GWAS) that aim to detect associations between genetic markers and phenotypic measurements in a population of individuals while accounting for population structure and cryptic relatedness. In a standard GWAS, hundreds of thousands to millions of statistical tests are performed, requiring control for multiple hypothesis testing. Typically, static corrections that penalize the number of tests performed are used to control for the family-wise error rate, which is the probability of making at least one false positive. However, it has been shown that in practice this threshold is too conservative for normally distributed phenotypes and not stringent enough for non-normally distributed phenotypes. Therefore, permutation-based LMM approaches have recently been proposed to provide a more realistic threshold that takes phenotypic distributions into account. In this work, we will discuss the advantages of permutation-based GWAS approaches, including new simulations and results from a re-analysis of all publicly available Arabidopsis thaliana phenotypes from the AraPheno database.
Current methods for end-to-end constructive neural combinatorial optimization usually train a policy using behavior cloning from expert solutions or policy gradient methods from reinforcement learning. While behavior cloning is straightforward, it requires expensive expert solutions, and policy gradient methods are often computationally demanding and complex to fine-tune. In this work, we bridge the two and simplify the training process by sampling multiple solutions for random instances using the current model in each epoch and then selecting the best solution as an expert trajectory for supervised imitation learning. To achieve progressively improving solutions with minimal sampling, we introduce a method that combines round-wise Stochastic Beam Search with an update strategy derived from a provable policy improvement. This strategy refines the policy between rounds by utilizing the advantage of the sampled sequences with almost no computational overhead. We evaluate our approach on the Traveling Salesman Problem and the Capacitated Vehicle Routing Problem. The models trained with our method achieve comparable performance and generalization to those trained with expert data. Additionally, we apply our method to the Job Shop Scheduling Problem using a transformer-based architecture and outperform existing state-of-the-art methods by a wide margin.
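The core training idea, sampling several solutions per instance and keeping the best as the expert trajectory, can be sketched on a toy TSP instance. The uniform-random "policy" below is a stand-in assumption for the neural policy the method actually samples from:

```python
import math
import random

random.seed(0)

# Hypothetical toy instance: 10 random cities in the unit square.
cities = [(random.random(), random.random()) for _ in range(10)]

def tour_length(tour):
    """Total length of a closed tour over the city indices."""
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
        for i in range(len(tour)))

def sample_tour():
    """Stand-in for sampling a solution from the current policy."""
    tour = list(range(len(cities)))
    random.shuffle(tour)
    return tour

# Core idea: sample multiple solutions per instance, then keep the best
# one as the expert trajectory for supervised imitation learning.
samples = [sample_tour() for _ in range(32)]
expert_trajectory = min(samples, key=tour_length)
```

In the paper's method, the sampling additionally uses round-wise Stochastic Beam Search and an advantage-based policy update between rounds; the snippet only shows the sample-then-select principle.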
Josef Eiglsperger, Florian Haselbeck, Viola Stiele, Claudia Guadarrama Serrano, Kelly Lim-Trinh, Klaus Menrad, Thomas Hannus, Dominik Grimm
Accurately forecasting demand is a potential competitive advantage, especially when dealing with perishable products. The multi-billion dollar horticultural industry is highly affected by perishability, but has received limited attention in forecasting research. In this paper, we analyze the applicability of general compared to dataset-specific predictors, as well as the influence of external information and online model update schemes. We employ a heterogeneous set of horticultural data, three classical, and twelve machine learning-based forecasting approaches. Our results show a superiority of multivariate machine learning methods, in particular the ensemble learner XGBoost. They also highlight the importance of external factors, with the feature set containing statistical, calendrical, and weather-related features leading to the most robust performance. We further observe that a general model is unable to capture the heterogeneity of the data and is outperformed by dataset-specific predictors. Moreover, frequent model updates have a negligible impact on forecasting quality, allowing long-term forecasting without significant performance degradation.
Nikita Genze, Wouter K Vahl, Jennifer Groth, Maximilian Wirth, Michael Grieb, Dominik Grimm
Sustainable weed management strategies are critical to feeding the world’s population while preserving ecosystems and biodiversity. Therefore, site-specific weed control strategies based on automation are needed to reduce the additional time and effort required for weeding. Machine vision-based methods appear to be a promising approach for weed detection, but require high quality data on the species in a specific agricultural area. Here we present a dataset, the Moving Fields Weed Dataset (MFWD), which captures the growth of 28 weed species commonly found in sorghum and maize fields in Germany. A total of 94,321 images were acquired in a fully automated, high-throughput phenotyping facility to track over 5,000 individual plants at high spatial and temporal resolution. A rich set of manually curated ground truth information is also provided, which can be used not only for plant species classification, object detection and instance segmentation tasks, but also for multiple object tracking.
In Bavaria, sorghum is grown as an energy crop, primarily for biogas production. Its high biomass yield and large varietal diversity, combined with its drought tolerance and nutrient efficiency, make sorghum a promising raw material crop. Novel technologies combined with intelligent software open up great potential for increasing efficiency in agriculture. Using state-of-the-art machine learning methods, such as artificial neural networks and deep learning, drone-based images of the cultivated fields can be analyzed and weeds detected.
Raymond Ajekwe, Michael Grieb, Nikita Genze, Dominik Grimm
Novel technologies combined with intelligent image analysis open up great potential for increasing efficiency in agriculture. Using state-of-the-art machine learning methods (e.g., artificial neural networks), drone-based images of sorghum fields are to be analyzed automatically and weeds detected. In Bavaria, sorghum is grown as an energy crop, primarily for biogas production. Its high biomass yield and large varietal diversity, combined with its drought tolerance and nutrient efficiency, make sorghum a promising raw material crop.
Genome-wide association studies (GWAS) are a powerful tool to elucidate the genotype–phenotype map. Although GWAS are usually used to assess simple univariate associations between genetic markers and traits of interest, it is also possible to infer the underlying genetic architecture and to predict gene regulatory interactions. In this chapter, we describe the latest methods and tools to perform GWAS by calculating permutation-based significance thresholds. For this purpose, we first provide guidelines on univariate GWAS analyses that are extended in the second part of this chapter to more complex models that enable the inference of gene regulatory networks and how these networks vary.
In this chapter, we introduce the concept of RNA-Seq analyses. First, we provide an overview of a typical RNA-Seq experiment that includes extraction of sample RNA, enrichment, and cDNA library preparation. Next, we review tools for quality control and data pre-processing, followed by a standard workflow to perform RNA-Seq analyses. For this purpose, we discuss two common RNA-Seq strategies, that is, a reference-based alignment and a de novo assembly approach. We show how to perform basic downstream analyses of RNA-Seq data, including quantification of expressed genes, differential gene expression (DE) analysis between different groups, as well as functional gene analysis. Finally, we provide a best-practice example for a reference-based RNA-Seq analysis from beginning to end, including all necessary tools and steps, on GitHub: https://github.com/grimmlab/BookChapter-RNA-Seq-Analyses.
Enhancing Weed Detection with Fine-Tuned Foundation Models for Robust and Generalizable Precision Agriculture (2025) Weihenstephan Bioinformatics Symposium 2025.
Maura John, Dominik Grimm
permGWAS2: Population-aware Permutation-based Significance Thresholds for Genome-wide Association Studies (2024) German Bioinformatics Conference (GCB), Bielefeld, 30 September - 2 October 2024.
Predicting Protein Thermostability through Deep Learning Leveraging 3D Structural Information (2024) Biological Materials Science - A workshop on biogenic, bioinspired, biomimetic and biohybrid materials for innovative optical, photonics and optoelectronics applications 2024.
In protein engineering, improving thermostability is essential for many industrial and pharmaceutical applications. However, the experimental process of identifying stabilizing mutations is time-consuming due to the enormous search space. With the increasing availability of protein structural and thermostability data, computational approaches using deep learning to identify thermostable candidates are gaining popularity. In this work, we present and benchmark a novel graph neural network, ProtGCN, that incorporates geometric and energetic details of proteins to predict changes in Gibbs free energy (ΔG), a key indicator of thermostability, upon single point mutations. Unlike conventional methods that rely on sequence or structural features, our model uses protein graphs with rich node features, carefully preprocessed from a comprehensive dataset of approximately 4149 mutated sequences across 117 protein families. In addition, ProtGCN is enhanced by incorporating embeddings from the Evolutionary Scale Modeling (ESM) protein language model into the protein graphs. This integration allows ProtGCN (ESM) to outperform comparison models, achieving competitive performance with XGBoost and a protein language model-based multi-layer perceptron on all evaluation metrics, and outperforming all models on further analyses. A strength of ProtGCN (ESM) is its ability to correctly identify and predict stabilizing and destabilizing mutations with extreme effects, which are typically underrepresented in thermostability datasets. These results suggest a promising direction for future computational protein engineering research.
Florian Haselbeck, Maura John, Yuqi Zhang, Jonathan Pirnay, Juan Pablo Fuenzalida-Werner, Ruben Costa, Dominik Grimm
Superior Protein Thermophilicity Prediction With Protein Language Model Embeddings (2024) Biological Materials Science - A workshop on biogenic, bioinspired, biomimetic and biohybrid materials for innovative optical, photonics and optoelectronics applications 2024.
Protein thermostability is an essential property for many biotechnological fields, such as enzyme engineering and protein-hybrid optoelectronics. In this context, machine learning-based in silico predictions have the potential to reduce costs and development time by identifying the most promising candidates for subsequent experiments. The development of such prediction models is enabled by ever-growing protein databases and information on protein stability at different temperatures. In this study, we leverage protein language model embeddings for thermophilicity prediction with ProLaTherm, a Protein Language model-based Thermophilicity predictor. We assess ProLaTherm against several feature-, sequence-, and literature-based comparison partners on a new benchmark dataset derived from a significant update of published data. ProLaTherm outperforms all comparison partners both in a nested cross-validation setup and on protein sequences from species not seen during training with respect to multiple evaluation metrics. In terms of the Matthews correlation coefficient, ProLaTherm surpasses the second-best competitor by 18.1% in the nested cross-validation setup. Using proteins from species that do not overlap with species from the training data, ProLaTherm outperforms all competitors by at least 9.7%. On this data, it misclassified only one non-thermophilic protein as thermophilic. Furthermore, it correctly identified 97.4% of all thermophilic proteins in our test set with an optimal growth temperature above 70°C.
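The headline metric, the Matthews correlation coefficient, can be computed directly from the binary confusion matrix. This small reference implementation is for illustration only and is not part of ProLaTherm:

```python
def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient from a 2x2 confusion matrix.

    Labels are 1 (thermophilic) and 0 (non-thermophilic); returns a
    value in [-1, 1], with 0 for a degenerate confusion matrix.
    """
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Perfect prediction yields 1.0, perfectly inverted prediction -1.0.
y_true = [1, 1, 1, 0, 0, 0]
print(matthews_corrcoef(y_true, [1, 1, 1, 0, 0, 0]))  # → 1.0
```

Unlike accuracy, the MCC accounts for all four confusion-matrix cells, which makes it robust on class-imbalanced thermostability datasets.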
Jonathan Pirnay, Quirin Göttl, Jakob Burger, Dominik Grimm
AlphaZero-type algorithms may stop improving on single-player tasks in case the value network guiding the tree search is unable to approximate the outcome of an episode sufficiently well. One technique to address this problem is transforming the single-player task through self-competition. The main idea is to compute a scalar baseline from the agent’s historical performances and to reshape an episode’s reward into a binary output, indicating whether the baseline has been exceeded or not. However, this baseline only carries limited information for the agent about strategies for improvement. We leverage the idea of self-competition and directly incorporate a historical policy into the planning process instead of its scalar performance. Based on the recently introduced Gumbel AlphaZero (GAZ), we propose our algorithm GAZ ‘Play-to-Plan’ (GAZ PTP), in which the agent learns to find strong trajectories by planning against possible strategies of its past self. We show the effectiveness of our approach in two well-known combinatorial optimization problems, the Traveling Salesman Problem and the Job-Shop Scheduling Problem. With only half of the simulation budget for search, GAZ PTP consistently outperforms all selected single-player variants of GAZ.
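The scalar self-competition baseline that GAZ PTP moves beyond can be sketched in a few lines. The mean-of-history baseline and the example returns below are illustrative assumptions:

```python
def binarize_reward(episode_return, historical_returns):
    """Reshape an episode's return into a binary outcome: did the agent
    beat a scalar baseline computed from its own past performances?

    Here the baseline is the mean of historical returns; other choices
    (e.g. a quantile) are possible.
    """
    baseline = sum(historical_returns) / len(historical_returns)
    return 1.0 if episode_return > baseline else 0.0

# Example with negative tour lengths as returns: past episodes average
# -10.0, so a new episode with return -8.0 exceeds the baseline.
print(binarize_reward(-8.0, [-12.0, -10.0, -8.0]))  # → 1.0
```

This illustrates why the scalar baseline carries limited information: the agent only learns whether it beat its past self, not how. GAZ PTP instead plans directly against a historical policy.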
Jan D Hüwel, Florian Haselbeck, Dominik Grimm, Christian Beecks
One of the major challenges in time series analysis is changing data distributions, especially when processing data streams. To ensure an up-to-date model delivering useful predictions at all times, model reconfigurations are required to adapt to such evolving streams. For Gaussian processes, this might require the adaptation of the internal kernel expression. In this paper, we present dynamically self-adjusting Gaussian processes by introducing Event Triggered Kernel Adjustments in Gaussian process modelling (ETKA), a novel data stream modelling algorithm that can handle evolving and changing data distributions. To this end, we enhance the recently introduced Adjusting Kernel Search with a novel online change point detection method. Our experiments on simulated data with varying change point patterns suggest a broad applicability of ETKA. On real-world data, ETKA outperforms comparison partners that differ regarding the model adjustment and its refitting trigger in nine and ten out of 14 cases, respectively. These results confirm ETKA's ability to enable a more accurate and, in some settings, also more efficient data stream processing via Gaussian processes.
Code availability: https://github.com/JanHuewel/ETKA
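A minimal online change point detector conveys the kind of event trigger ETKA relies on. The rolling z-score rule, window size, and threshold below are simplified assumptions, not ETKA's actual detection method:

```python
from collections import deque

class ZScoreChangeDetector:
    """Toy online change point detector: flag a point whose distance
    from the rolling mean exceeds `k` rolling standard deviations."""

    def __init__(self, window=30, k=4.0):
        self.window = deque(maxlen=window)
        self.k = k

    def update(self, x):
        """Process one stream value; return True if a change is flagged."""
        if len(self.window) >= 5:  # need a few samples for stable stats
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = var ** 0.5
            if std > 0 and abs(x - mean) > self.k * std:
                self.window.clear()  # restart statistics after a change
                self.window.append(x)
                return True
        self.window.append(x)
        return False

detector = ZScoreChangeDetector()
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 5.0]  # shift at the end
flags = [detector.update(x) for x in stream]
```

In ETKA, such a trigger would initiate an adjustment of the Gaussian process kernel expression rather than a simple statistics reset.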
The present work uses reinforcement learning (RL) for automated flowsheet synthesis. The task of synthesizing a flowsheet is reformulated into a two-player game, in which an agent learns by self-play without prior knowledge. The hierarchical RL scheme developed in our previous work (Göttl et al., 2021b) is coupled with an improved training process. The training process is analyzed in detail using the synthesis of ethyl tert-butyl ether (ETBE) as an example. This analysis uncovers how the agent’s evolution is driven by the two-player setup.
Time series forecasting is a growing domain with diverse applications. However, changes of the system behavior over time due to internal or external influences are challenging. Therefore, predictions of a previously learned forecasting model might not be useful anymore. In this paper, we present EVent-triggered Augmented Refitting of Gaussian Process Regression for Seasonal Data (EVARS-GPR), a novel online algorithm that is able to handle sudden shifts in the target variable scale of seasonal data. For this purpose, EVARS-GPR combines online change point detection with a refitting of the prediction model using data augmentation for samples prior to a change point. Our experiments on simulated data show that EVARS-GPR is applicable for a wide range of output scale changes. EVARS-GPR has on average a 20.8% lower RMSE on different real-world datasets compared to methods with a similar computational resource consumption. Furthermore, we show that our algorithm leads to a six-fold reduction of the averaged runtime in relation to all comparison partners with a periodical refitting strategy. In summary, we present a computationally efficient online forecasting algorithm for seasonal time series with changes of the target variable scale and demonstrate its functionality on simulated as well as real-world data. All code is publicly available on GitHub: https://github.com/grimmlab/evars-gpr.
Automated flowsheet synthesis with deep reinforcement learning (2024) Invited Talk, European Conference on Machine Learning (ECML), Machine Learning for Chemistry and Chemical Engineering (ML4CCE), Vilnius, Lithuania, 9th of September.
Dominik Grimm
KI in der Landwirtschaft (2024) Sustainability Dialog at TUMCS.
Dominik Grimm
Towards a better understanding of the genetic architecture of complex traits (2022) Keynote @TüBMI 2022, Tübinger Bioinformatics and Medical Informatics Days 2022.
Dominik Grimm, Quirin Göttl, Jakob Burger
Reinforcement Learning für die automatisierte Fließbildsynthese (2021) AI4Life, KI Symposium.
Florian Haselbeck, Dominik Grimm
EVARS-GPR: EVent-triggered Augmented Refitting of Gaussian Process Regression for Seasonal Data (2021) 44th German Conference on Artificial Intelligence (Virtual Conference).
DOI: 10.1007/978-3-030-87626-5_11
Time series forecasting is a growing domain with diverse applications. However, changes of the system behavior over time due to internal or external influences are challenging. Therefore, predictions of a previously learned forecasting model might not be useful anymore. In this paper, we present EVent-triggered Augmented Refitting of Gaussian Process Regression for Seasonal Data (EVARS-GPR), a novel online algorithm that is able to handle sudden shifts in the target variable scale of seasonal data. For this purpose, EVARS-GPR combines online change point detection with a refitting of the prediction model using data augmentation for samples prior to a change point. Our experiments on simulated data show that EVARS-GPR is applicable for a wide range of output scale changes. EVARS-GPR has on average a 20.8% lower RMSE on different real-world datasets compared to methods with a similar computational resource consumption. Furthermore, we show that our algorithm leads to a six-fold reduction of the averaged runtime in relation to all comparison partners with a periodical refitting strategy. In summary, we present a computationally efficient online forecasting algorithm for seasonal time series with changes of the target variable scale and demonstrate its functionality on simulated as well as real-world data. All code is publicly available on GitHub: https://github.com/grimmlab/evars-gpr.
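The data augmentation step can be illustrated with a simple rescaling of historical samples. The mean-ratio scale factor below is a simplified assumption for illustration, not EVARS-GPR's exact augmentation procedure:

```python
def augment_prior_samples(values, pre_change, post_change):
    """After a detected change point, rescale historical samples by the
    ratio of post- to pre-change target level, so a refitted model sees
    training data on the new output scale."""
    pre_level = sum(pre_change) / len(pre_change)
    post_level = sum(post_change) / len(post_change)
    factor = post_level / pre_level
    return [v * factor for v in values]

# A seasonal series whose output scale doubles after the change point:
history = [1.0, 2.0, 3.0, 2.0]
augmented = augment_prior_samples(history, pre_change=[2.0], post_change=[4.0])
```

Refitting on rescaled history like this preserves the learned seasonal shape while adapting the model to the shifted target variable scale.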
Klaus Menrad, Florian Haselbeck, Daniel Berki-Kiss, Thomas Decker, Dominik Grimm, Thomas Hannus, Kai Sparke, M. Lehberger, M. Drechsler, Andreas Holzapfel, S. Schröder, Gerald Neu, F. Bertlich
Weeds are plants that grow on agricultural land and reduce crop yield and quality through resource competition. Various strategies and measures are available for weed control. Herbicides are used most frequently, although risks to human health and the environment are known. To reduce the use of pesticides, site-specific weed management is therefore recommended, which requires precise detection and classification of weeds. Unmanned aerial vehicles (UAVs) are useful tools for acquiring high-resolution field data for image segmentation with deep learning methods. In contrast to other imaging systems such as conventional cameras, satellites, and aircraft, UAVs are inexpensive, fast, and, due to their lower flight altitude, capture high-quality images without interference from clouds.

In this project, the suitability of UAVs for generating field data for the early detection of weeds in sorghum was investigated. Different acquisition conditions and UAV settings were evaluated to optimize image quality. The resulting images were then manually annotated, yielding a large, diverse, and high-quality UAV weed dataset for sorghum. Using this dataset, several state-of-the-art deep learning models were trained and tested for their ability to generalize, taking into account factors such as motion blur, direct sunlight, and different growth stages.

The developed models were able to detect dicotyledonous weeds with high accuracy across a wide variety of drone images and sorghum growth stages, but had difficulties with monocotyledonous grass weeds. To reduce motion blur, which is considered one of the main causes of quality loss, a two-stage deep learning approach was developed that improves image quality prior to segmentation. In addition, a generative model was developed to produce artificial weed images together with the corresponding segmentation masks. These generated image pairs can then be used to train new AI models for weed segmentation when training data are sparse. Finally, an adaptation test was carried out for maize to verify the transferability of the model to other crop species. It showed good results in early growth stages (BBCH 13 and 15), but the distinction between maize and weeds was limited in later growth stages, indicating the need for further research.
Fabian Schäfer, Manuel Walther, Dominik Grimm, Alexander Hübner
This paper develops a multi-objective decision support model for solving the patient bed assignment problem. Assigning inpatients to hospital beds impacts patient satisfaction and the workload of nurses and doctors. The assignment is subject to unknown patient arrivals and lengths of stay, in particular for emergency patients. Hospitals therefore need to deal with uncertainty on actual bed requirements and potential shortage situations as bed capacities are limited. This paper contributes by improving the anticipation of emergency patients using machine learning (ML) approaches, incorporating weather data, time and dates, important local and regional events, as well as current and historical occupancy levels. Drawing on real-life data from a large case hospital, we were able to improve forecasting accuracy for emergency inpatient arrivals. We achieved an up to 17% better root mean square error when using ML methods compared to a baseline approach relying on averages for historical arrival rates. Second, we develop a new hyper-heuristic for solving real-life problem instances based on the pilot method and a specialized greedy look-ahead heuristic. When applying the hyper-heuristic to test sets we were able to increase the objective function by up to 3% in a single problem instance and up to 4% in a time series analysis compared to current approaches in the literature. We achieved an improvement of up to 2.2% compared to a baseline approach from the literature by combining the emergency patient admission forecasting and the hyper-heuristic in real-life situations.
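The RMSE comparison against an averages-based baseline can be sketched as follows; the arrival counts and forecast values are made-up numbers for illustration only:

```python
def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted values."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
            / len(y_true)) ** 0.5

# Hypothetical daily emergency arrivals vs. two forecasts: a baseline
# using the historical average arrival rate, and an ML forecast that
# exploits covariates such as weather, dates, and occupancy levels.
arrivals = [12, 15, 9, 20, 14]
baseline = [14, 14, 14, 14, 14]      # historical mean arrival rate
ml_forecast = [13, 15, 10, 18, 14]   # hypothetical ML predictions

# Relative RMSE improvement of the ML forecast over the baseline.
improvement = 1 - rmse(arrivals, ml_forecast) / rmse(arrivals, baseline)
```

The paper reports an improvement of up to 17% in RMSE computed in exactly this baseline-relative fashion, though on real hospital data rather than toy numbers.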
Time series forecasting is a growing domain with diverse applications. However, changes of the system behavior over time due to internal or external influences are challenging. Therefore, predictions of a previously learned forecasting model might not be useful anymore. In this paper, we present EVent-triggered Augmented Refitting of Gaussian Process Regression for Seasonal Data (EVARS-GPR), a novel online algorithm that is able to handle sudden shifts in the target variable scale of seasonal data. For this purpose, EVARS-GPR combines online change point detection with a refitting of the prediction model using data augmentation for samples prior to a change point. Our experiments on simulated data show that EVARS-GPR is applicable for a wide range of output scale changes. EVARS-GPR has on average a 20.8% lower RMSE on different real-world datasets compared to methods with a similar computational resource consumption. Furthermore, we show that our algorithm leads to a six-fold reduction of the averaged runtime in relation to all comparison partners with a periodical refitting strategy. In summary, we present a computationally efficient online forecasting algorithm for seasonal time series with changes of the target variable scale and demonstrate its functionality on simulated as well as real-world data. All code is publicly available on GitHub: https://github.com/grimmlab/evars-gpr.
Automated flowsheet synthesis is an important field in computer-aided process engineering. The present work demonstrates how reinforcement learning can be used for automated flowsheet synthesis without any heuristics or prior knowledge of conceptual design. The environment consists of a steady-state flowsheet simulator that contains all physical knowledge. An agent is trained to take discrete actions and sequentially build up flowsheets that solve a given process problem. A novel method named SynGameZero is developed to ensure good exploration schemes in the complex problem. Therein, flowsheet synthesis is modelled as a game of two competing players. The agent plays this game against itself during training and consists of an artificial neural network and a tree search for forward planning. The method is applied successfully to a reaction-distillation process in a quaternary system.
Methods from the predecessor project on flowsheet synthesis are to be improved, and a generalized approach is to be developed in which the task specifies a feed stream and its …
This project explores the potential of generative artificial intelligence (AI) for the discovery and optimization of enzymes for cell-free bioprocess engineering.