Today, data is gathered at more scales and frequencies, by more organisations, and with more types of devices than at any other time in history.
Most of it is gathered using already outdated technologies and systems. This creates an integration challenge: digesting data at the local scale, using it to update regional and nationwide-scale models, and feeding the resulting models back to the local scale to inform local inference.
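One possible realisation of this local-to-national loop is a federated-averaging pattern, where farms train on their own data and only model parameters travel upward. The sketch below is illustrative only: the one-parameter linear model, learning rate and farm datasets are all invented for the example.

```python
# Hypothetical sketch: each farm takes a local gradient step on its own
# sensor data; a regional server averages the farm models and sends the
# aggregate back. Raw data never leaves the farm.

def local_update(w, data, lr=0.01):
    """One gradient step of a 1-D linear fit y ~ w*x on local readings."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def regional_average(local_weights):
    """Aggregate farm models into a regional model."""
    return sum(local_weights) / len(local_weights)

# Two invented farms with slightly different soil-moisture responses.
farm_a = [(1.0, 2.1), (2.0, 4.0)]
farm_b = [(1.0, 1.9), (2.0, 4.2)]

regional = 0.0                        # shared starting model
for _ in range(200):                  # repeated train/aggregate rounds
    wa = local_update(regional, farm_a)
    wb = local_update(regional, farm_b)
    regional = regional_average([wa, wb])   # fed back to every farm
```

After enough rounds the regional weight settles near the average of the two farms' individual optima, which is the behaviour the loop in the text describes.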
If every farm has thousands of affordable sensors embedded in its environment (soil, waterways, animals), constantly “tasting” soil chemistry, flow rates, and biological activity of every kind, each farm will produce a massive volume of data per day that must be stored, analysed, curated and maintained.
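A back-of-envelope calculation gives a feel for the scale involved. Every figure below (sensor count, sampling rate, record size) is an assumption chosen for illustration, not a measurement from any real deployment.

```python
# Illustrative estimate of daily data volume for one heavily instrumented farm.
sensors_per_farm = 5_000      # "thousands of affordable sensors" (assumed)
readings_per_hour = 60        # one sample per minute per sensor (assumed)
bytes_per_reading = 64        # timestamp + sensor ID + value + metadata (assumed)

daily_bytes = sensors_per_farm * readings_per_hour * 24 * bytes_per_reading
daily_mb = daily_bytes / 1e6
print(f"~{daily_mb:.0f} MB/day per farm")  # before any imagery or video
```

Even at these modest rates the figure is hundreds of megabytes per farm per day; multiplied across tens of thousands of farms, and before adding imagery or video, the nationwide total quickly reaches terabytes per day.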
We could seed all this information, such as crop variations, soil health and weather patterns, combined with insurance options, credit availability and market forecasts, into a single database and analyse it with AI and data analytics. The goal is then to develop personalised services for a sector replete with challenges such as peaking yields, water stress, degrading soil and a comparative lack of infrastructure.
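As a toy illustration of that single-database idea, the snippet below joins hypothetical per-farm agronomic records with financial data to drive one personalised recommendation. The field names, thresholds and rule are all invented for the example.

```python
# Two invented data sources keyed by farm ID, standing in for tables
# in the unified database described above.
agronomy = {"farm_42": {"soil_health": 0.6, "water_stress": 0.8}}
finance = {"farm_42": {"credit_limit": 10_000, "wheat_forecast": 1.2}}

def personalised_advice(farm_id):
    """Join agronomic and financial records, then apply a toy rule."""
    record = {**agronomy[farm_id], **finance[farm_id]}
    if record["water_stress"] > 0.7 and record["credit_limit"] > 5_000:
        return "offer financing for drip irrigation"
    return "no action"

print(personalised_advice("farm_42"))
```

The point is not the rule itself but the join: once agronomic, financial and market data share keys in one store, per-farm services become simple queries over the combined record.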
This is supercomputing territory. Real-time advanced modelling and simulation for quick decision-making requires significant computing power. In this talk, Open Parallel's Nicolás Erdödy outlines the powerful modern technology tools and models available, but not often used, in the agriculture sector, and how a system of systems similar to a digital twin can be developed, from the edge to exascale computing, to optimise production while measuring climate change as it happens.