Publications
A comprehensive collection of peer-reviewed research in machine learning, computer vision, and plant science
StomaGAN: Improving image-based analysis of stomata through Generative Adversarial Networks
Journal Article
Abstract
Stomata regulate gas exchange between plants and the atmosphere, but analysing their morphology is challenging due to anatomical variability and artifacts introduced during image acquisition. Deep learning (DL) can address these challenges but often requires large and diverse datasets, which are costly and error-prone to produce. Generative adversarial networks (GANs) offer a solution by generating artificial data via unsupervised learning. However, GANs often suffer from problems including mode collapse, vanishing gradients, and network failure, particularly with small datasets. Here, we present StomaGAN, a deep convolutional GAN (DCGAN) with tailored modifications to address common GAN issues. We collected 559 stomatal impressions of field bean, or faba bean (Vicia faba), comprising ~3,000 stomata, 80% of which were used to train StomaGAN. Evaluation metrics, including generator and discriminator loss progression and a mean Fréchet Inception Distance (FID) score of 61.4 across eight experimental runs, confirm successful training. To validate StomaGAN, we generated artificial images to train a deep convolutional neural network (DCNN) based on the DeepLabV3 framework for stomata detection in real, unseen images. The DCNN achieved a mean Intersection over Union (IoU) of 0.95 on artificial training images and 0.91 on real, unseen images across varying magnifications. Our results demonstrate that StomaGAN effectively generates high-quality synthetic datasets, enabling reliable stomatal detection and enhancing phenotypic analysis. This approach reduces the need for extensive manual data collection and simplifies complex morphological assessments.
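To illustrate the class of model involved, the sketch below shows a minimal DCGAN generator/discriminator pair in PyTorch. The latent size, layer widths, and 64x64 greyscale output are illustrative assumptions for demonstration, not the published StomaGAN architecture or its tailored modifications.

```python
# Minimal DCGAN sketch (illustrative sizes; not the published StomaGAN network).
import torch.nn as nn

LATENT_DIM = 100  # assumed latent vector size

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # project the latent vector to a 4x4 map, then upsample to 64x64
            nn.ConvTranspose2d(LATENT_DIM, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1, bias=False),
            nn.Tanh(),  # greyscale output in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 512, 4, 2, 1, bias=False),
            nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),
            nn.Conv2d(512, 1, 4, 1, 0, bias=False), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)  # per-image real/fake probability
```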
Application of deep learning for the analysis of stomata: a review of current methods and future directions
Review Article
Abstract
Plant physiology and metabolism rely on the function of stomata, structures on the surface of above-ground organs that facilitate the exchange of gases with the atmosphere. The morphology of the guard cells and corresponding pore that make up the stomata, as well as the density (number per unit area), are critical in determining overall gas exchange capacity. These characteristics can be quantified visually from images captured using microscopy, traditionally relying on time-consuming manual analysis. However, deep learning (DL) models provide a promising route to increase the throughput and accuracy of plant phenotyping tasks, including stomatal analysis. Here we review the published literature on the application of DL for stomatal analysis. We discuss the variation in pipelines used, from data acquisition, pre-processing, DL architecture, and output evaluation to post-processing. We introduce the most common network structures, the plant species that have been studied, and the measurements that have been performed. Through this review, we hope to promote the use of DL methods for plant phenotyping tasks and highlight future requirements to optimize uptake, predominantly focusing on the sharing of datasets and generalization of models as well as the caveats associated with utilizing image data to infer physiological function.
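As a schematic of the pipeline stages the review surveys (data acquisition, pre-processing, DL inference, and post-processing/measurement), the hedged Python sketch below uses torchvision's generic DeepLabV3 model as a stand-in for a network trained on stomatal images; the two-class setup and the density calculation are illustrative assumptions only.

```python
# Hedged pipeline sketch: acquisition -> pre-processing -> DL inference -> measurement.
# The model is an untrained generic DeepLabV3 placeholder, not a stomata-trained network.
import torch
import torchvision.transforms.functional as TF
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image
from skimage import measure

model = deeplabv3_resnet50(num_classes=2).eval()  # placeholder: background vs. stoma

def stomatal_density(image_path, image_area_mm2):
    img = Image.open(image_path).convert("RGB")              # data acquisition
    x = TF.to_tensor(img).unsqueeze(0)                       # pre-processing
    with torch.no_grad():
        mask = model(x)["out"].argmax(1).squeeze(0).numpy()  # DL inference
    stomata = measure.label(mask == 1).max()                 # post-processing: count regions
    return stomata / image_area_mm2                          # stomata per mm^2
```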
The effect of canopy architecture on the patterning of "windflecks" within a wheat canopy
Research Article
Abstract
Under field conditions, plants are subject to wind-induced movement, which creates fluctuations in the intensity and spectral quality of light reaching the leaves, defined here as windflecks. In this study, irradiance within two contrasting wheat (Triticum aestivum) canopies during full sun conditions was measured using a spectroradiometer to determine the frequency, duration, and magnitude of low- to high-light events, plus the spectral composition, during wind-induced movement. Similarly, a static canopy was modelled using three-dimensional reconstruction and ray tracing to determine fleck characteristics in the absence of wind. Corresponding architectural traits were measured manually and in silico, including plant height, leaf area and angle, plus biomechanical properties. Light intensity can differ by up to 40% during a windfleck, with changes occurring on a sub-second scale compared with ~5 min in canopies not subject to wind. Features such as a shorter height, more erect leaf stature, and an open structure led to an increased frequency and reduced time interval of light flecks in the CMH79A canopy compared to Paragon. This finding illustrates the potential for architectural traits to be selected to improve the canopy light environment and provides the foundation to further explore the links between plant form and function in crop canopies.
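The fleck statistics referred to above (frequency, duration, and magnitude) can be illustrated with a simple time-series analysis. The sketch below is not the study's analysis code; the threshold, sampling rate, and synthetic trace are assumptions used purely for demonstration.

```python
# Illustrative windfleck statistics from an irradiance trace (not the study's code).
import numpy as np

def fleck_events(ppfd, threshold, sample_hz):
    """Return (count, mean duration in s, mean magnitude) of above-threshold events."""
    above = ppfd > threshold
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]          # trace begins inside an event
    if above[-1]:
        ends = np.r_[ends, above.size]     # trace ends inside an event
    if len(starts) == 0:
        return 0, 0.0, 0.0
    durations = (ends - starts) / sample_hz
    magnitudes = [ppfd[s:e].max() - threshold for s, e in zip(starts, ends)]
    return len(starts), float(np.mean(durations)), float(np.mean(magnitudes))

# Example: synthetic 10 Hz trace with ~1 s excursions above 1000 umol m-2 s-1
t = np.arange(0, 60, 0.1)
trace = 800 + 400 * (np.sin(2 * np.pi * 0.5 * t) > 0)
print(fleck_events(trace, threshold=1000, sample_hz=10))
```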
A deep learning method for fully automatic stomatal morphometry and maximal conductance estimation
Research Article
Abstract
Stomata are integral to plant performance, enabling the exchange of gases between the atmosphere and the plant. The anatomy of stomata influences conductance properties, with the maximal conductance rate, gsmax, calculated from density and size. However, current calculations of stomatal dimensions are performed manually, which is time-consuming and error-prone. Here, we show how automated morphometry from leaf impressions can predict a functional property: the anatomical gsmax. A deep learning network was derived to preserve stomatal morphometry via semantic segmentation. This forms part of an automated pipeline to measure stomatal traits for the estimation of anatomical gsmax. The proposed pipeline achieves 100% accuracy for species distinction (wheat vs. poplar) and for the detection of stomata in both datasets. The automated deep learning-based method gave estimates of gsmax within 3.8% and 1.9% of the values calculated manually by an expert for the wheat and poplar datasets, respectively. Semantic segmentation provides a rapid and repeatable method for the estimation of anatomical gsmax from microscopic images of leaf impressions. This advanced method is a step toward reducing the bottleneck associated with plant phenotyping approaches and will provide a rapid means to assess gas fluxes in plants based on stomatal morphometry.
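The anatomical gsmax referred to above is conventionally computed from stomatal density, pore size, and pore depth. The sketch below implements a commonly used form of that calculation; the gas constants, the elliptical-pore assumption, and the example trait values are standard or illustrative choices, not figures taken from the paper.

```python
# Commonly used anatomical gsmax calculation (constants and pore-shape assumption
# are standard/illustrative choices, not values from the paper).
import math

D_WV = 2.49e-5   # diffusivity of water vapour in air at ~25 C (m^2 s^-1)
V_AIR = 2.45e-2  # molar volume of air at ~25 C (m^3 mol^-1)

def anatomical_gsmax(density_per_m2, pore_length_m, pore_depth_m):
    """Maximum stomatal conductance to water vapour (mol m^-2 s^-1)."""
    # maximum pore area approximated as an ellipse with width = length / 2
    a_max = math.pi * (pore_length_m / 2) * (pore_length_m / 4)
    end_correction = (math.pi / 2) * math.sqrt(a_max / math.pi)
    return (D_WV * density_per_m2 * a_max) / (V_AIR * (pore_depth_m + end_correction))

# Example with plausible (illustrative) wheat-like trait values
print(anatomical_gsmax(density_per_m2=60e6, pore_length_m=25e-6, pore_depth_m=10e-6))
```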
Recovering wind-induced plant motion in dense field environments via deep learning and multiple object tracking
Research Article
Abstract
Understanding the relationships between local environmental conditions and plant structure and function is critical both for fundamental science and for improving the performance of crops in field settings. Wind-induced plant motion is important in most agricultural systems, yet the complexity of the field environment means that it remains understudied. Despite the ready availability of image sequences showing plant motion, the cultivation of crop plants in dense field stands makes it difficult to detect features and characterize their general movement traits. Here, we present a robust method for characterizing motion in field-grown wheat plants (Triticum aestivum) from time-ordered sequences of red, green, and blue images. A series of crops and augmentations was applied to a dataset of 290 collected and annotated images of ear tips to increase variation and resolution when training a convolutional neural network. This approach enables wheat ears to be detected in the field without the need for camera calibration or a fixed imaging position. Videos of wheat plants moving in the wind were also collected and split into their component frames. Ear tips were detected using the trained network, then tracked between frames using a probabilistic tracking algorithm to approximate movement. These data can be used to characterize key movement traits, such as periodicity, and to obtain more detailed static plant properties for assessing plant structure and function in the field. Automated data extraction may also be possible for informing lodging models and breeding programs, and for linking movement properties to canopy light distributions and dynamic light fluctuations.
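The detect-then-track structure described above can be illustrated with a simplified linker. The paper uses a probabilistic tracking algorithm; the sketch below substitutes Hungarian assignment on centroid distance purely to show how per-frame detections are joined into tracks, and the distance gate is an assumed parameter.

```python
# Simplified detect-then-track linker (Hungarian assignment on centroid distance);
# the published method uses a probabilistic tracker, so treat this as illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

MAX_DIST = 40.0  # px; assumed gate for accepting a match between consecutive frames

def link_frames(frames):
    """frames: list of (N_i, 2) arrays of ear-tip centroids per frame
    (assumes at least one detection in the first frame).
    Returns tracks as lists of (frame_index, centroid)."""
    tracks = [[(0, c)] for c in frames[0]]
    for f_idx in range(1, len(frames)):
        curr = frames[f_idx]
        if len(curr) == 0:
            continue
        prev = np.array([t[-1][1] for t in tracks])
        cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        matched = set()
        for r, c in zip(rows, cols):
            if cost[r, c] <= MAX_DIST:
                tracks[r].append((f_idx, curr[c]))
                matched.add(c)
        # unmatched detections start new tracks
        tracks += [[(f_idx, curr[c])] for c in range(len(curr)) if c not in matched]
    return tracks
```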
A canopy conundrum: can wind-induced movement help to increase crop productivity by relieving photosynthetic limitations?
Research Article
Abstract
Wind-induced movement is a ubiquitous occurrence for all plants grown in natural or agricultural settings, and in the context of high, damaging wind speeds it has been well studied. However, the impact of lower wind speeds (which do not cause any damage) on mode of movement, light transmission, and photosynthetic properties has, surprisingly, not been fully explored. This impact is likely to be influenced by biomechanical properties and architectural features of the plant and canopy. A limited number of eco-physiological studies have indicated that movement in wind has the potential to alter light distribution within canopies, improving canopy productivity by relieving photosynthetic limitations. Given the current interest in canopy photosynthesis, it is timely to consider such movement in terms of crop yield progress. This opinion article sets out the background to wind-induced crop movement and argues that plant biomechanical properties may have a role in the optimization of whole-canopy photosynthesis via established physiological processes. We discuss how this could be achieved using canopy models.
Active vision and surface reconstruction for 3D plant shoot modelling
Journal Article
Abstract
Plant phenotyping is the quantitative description of a plant's physiological, biochemical, and anatomical status, which can be used in trait selection and helps to provide mechanisms to link underlying genetics with yield. Here, an active vision-based pipeline is presented that aims to contribute to reducing the bottleneck associated with phenotyping of architectural traits. The pipeline fully automates photometric data acquisition and the recovery of three-dimensional (3D) models of plants without requiring botanical expertise, whilst ensuring a non-intrusive and non-destructive approach. Access to complete and accurate 3D models of plants supports computation of a wide variety of structural measurements. An Active Vision Cell (AVC) consisting of a camera-mounted robot arm plus a combined software interface and a novel surface reconstruction algorithm is proposed. This pipeline provides a robust, flexible, and accurate method for automating the 3D reconstruction of plants. The reconstruction algorithm can reduce noise and provides a promising and extendable framework for high-throughput phenotyping, improving current state-of-the-art methods. Furthermore, the pipeline can be applied to any plant species or form due to the application of an active vision framework combined with the automatic selection of key parameters for surface reconstruction.
Plant phenotyping: an active vision cell for three-dimensional plant shoot reconstruction
Research Article
Abstract
Three-dimensional (3D) computer-generated models of plants are urgently needed to support both phenotyping and simulation-based studies such as photosynthesis modeling. However, the construction of accurate 3D plant models is challenging, as plants are complex objects with an intricate leaf structure, often consisting of thin and highly reflective surfaces that vary in shape and size, forming dense, complex, crowded scenes. We address these issues within an image-based method by taking an active vision approach to image acquisition, one that investigates the scene in order to capture images intelligently. Rather than use the same camera positions for all plants, our technique is to acquire the images needed to reconstruct the target plant, tuning camera placement to match the plant's individual structure. Our method also combines volumetric- and surface-based reconstruction methods and determines the necessary images based on the analysis of voxel clusters. We describe a fully automatic plant modeling/phenotyping cell (or module) comprising a six-axis robot and a high-precision turntable. By using a standard color camera, we overcome the difficulties associated with laser-based plant reconstruction methods. The 3D models produced are compared with those obtained from fixed cameras and evaluated by comparison with data obtained by x-ray microcomputed tomography across different plant structures. Our results show that our method is successful in improving the accuracy and quality of data obtained from a variety of plant types.
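The idea of driving camera placement from a voxel analysis can be sketched as a toy next-best-view loop: candidate viewpoints are scored by how many currently unresolved voxels they would observe, and the best-scoring view is imaged next. This is a simplification for illustration, not the published algorithm; visibility is reduced to a range test and all positions are synthetic assumptions.

```python
# Toy next-best-view scoring from a voxel analysis (illustrative only; visibility
# is reduced to a simple range test and all positions are synthetic).
import numpy as np

def score_view(view_pos, unresolved_voxels, max_range=1.0):
    """Count unresolved voxel centres within imaging range of a candidate viewpoint."""
    return int(np.sum(np.linalg.norm(unresolved_voxels - view_pos, axis=1) < max_range))

def next_best_view(candidate_positions, unresolved_voxels):
    scores = [score_view(p, unresolved_voxels) for p in candidate_positions]
    best = int(np.argmax(scores))
    return candidate_positions[best], scores[best]

# Example: candidate viewpoints on a ring around the turntable, synthetic voxel cluster
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
ring = np.c_[np.cos(angles), np.sin(angles), np.full_like(angles, 0.5)]
voxels = np.random.rand(500, 3) * 0.4  # unresolved voxel centres near the origin
print(next_best_view(ring, voxels))
```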
Approaches to three-dimensional reconstruction of plant shoot topology and geometry
Review Article
Abstract
There are currently 805 million people classified as chronically undernourished, and yet the world's population is still increasing. At the same time, global warming is causing more frequent and severe flooding and drought, thus destroying crops and reducing the amount of land available for agriculture. Recent studies show that without crop climate adaptation, crop productivity will deteriorate. With access to 3D models of real plants, it is possible to acquire detailed morphological and gross developmental data that can be used to study their ecophysiology, leading to an increase in crop yield and stability across hostile and changing environments. Here we review approaches to the reconstruction of 3D models of plant shoots from image data, consider current applications in plant and crop science, and identify remaining challenges. We conclude that although phenotyping is receiving an increasing amount of attention – particularly from computer vision researchers – and numerous vision approaches have been proposed, it still remains a highly interactive process. An automated system capable of producing 3D models of plants would significantly aid phenotyping practice, increasing accuracy and repeatability of measurements.
Three-dimensional reconstruction of plant shoots from multiple images using an active vision system
Conference Paper
Abstract
The reconstruction of 3D models of plant shoots is a challenging problem central to the emerging discipline of plant phenomics – the quantitative measurement of plant structure and function. Current approaches are, however, often limited by the use of static cameras. We propose an automated active phenotyping cell to reconstruct plant shoots from multiple images, using a turntable capable of rotating 360 degrees and a camera-mounted robot arm. To overcome the problem of static camera positions, we develop an algorithm capable of analysing the environment and determining viewpoints from which to capture initial images suitable for use by a structure-from-motion technique.
Scheduling English football fixtures over the holiday period using hyper-heuristics
Conference Paper
Abstract
One of the annual issues that has to be addressed in English football is producing a fixture schedule for the holiday periods that reduces the travel distance for fans and players. This problem can be seen as a minimisation problem that must abide by the constraints set by the Football Association. In this study, the performance of selection hyper-heuristics is investigated as a solution methodology. Hyper-heuristics aim to automate the process of selecting and combining simpler heuristics to solve computational search problems. A selection hyper-heuristic stores a single candidate solution in memory and iteratively applies selected low-level heuristics to improve it. The results show that the learning hyper-heuristics outperform some previously proposed approaches and solutions published by the Football Association.
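The single-point selection hyper-heuristic described above can be sketched in a few lines: keep one candidate schedule, repeatedly select a low-level heuristic, apply it, and decide whether to accept the move. The random selection and accept-non-worsening rule below are illustrative stand-ins for the learning selection and acceptance strategies evaluated in the study, and the example heuristic and cost function are assumptions.

```python
# Single-point selection hyper-heuristic sketch (random selection, accept
# non-worsening moves); the heuristic set, cost, and acceptance rule are
# illustrative assumptions rather than the study's learning configuration.
import random

def hyper_heuristic(initial_schedule, low_level_heuristics, cost, iterations=10_000):
    current, current_cost = initial_schedule, cost(initial_schedule)
    for _ in range(iterations):
        heuristic = random.choice(low_level_heuristics)   # heuristic selection step
        candidate = heuristic(current)                    # apply low-level heuristic
        candidate_cost = cost(candidate)                  # e.g. total fan travel distance
        if candidate_cost <= current_cost:                # move acceptance step
            current, current_cost = candidate, candidate_cost
    return current, current_cost

# Example low-level heuristic: swap home and away teams for one randomly chosen fixture
def swap_home_away(schedule):
    schedule = [list(match) for match in schedule]
    i = random.randrange(len(schedule))
    schedule[i][0], schedule[i][1] = schedule[i][1], schedule[i][0]
    return [tuple(match) for match in schedule]
```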
Research Impact & Metrics
My research spans machine learning, computer vision, and plant science, with a particular focus on developing automated phenotyping systems and advancing deep learning applications in agriculture. This work has been published in leading journals and has contributed to advancing our understanding of plant-environment interactions.