Questionnaire on Adoption of AgriSemantics for Food Security

Dear GODAN Partner,

Dr. Chris Baker of the University of New Brunswick and Dr. Brett Drury, a senior data scientist and machine learning expert, are conducting research on the adoption of AgriSemantics for food security. As part of this research they have developed a questionnaire and need the input of subjects who – like many of GODAN’s partners – work with agricultural data.

Please share the link with your networks, and with anyone you know who may be interested in taking part: https://forms.gle/SsiKE2WF8si31aFK7

The results will form part of an upcoming publication on the topic. Find out more below.

Introduction

Agrisemantics allows the meaning of specific concepts in agricultural data to be clearly defined and understood by organisations that consume or produce data in the agricultural supply chain. This is achieved with publicly available resources such as taxonomies, ontologies and thesauri, which, when created for agriculture, are collectively referred to as Agrisemantics. These resources have been available for at least twenty years, yet their adoption by the agricultural sector has been limited and, at best, uneven. This study is designed to understand why the adoption of Agrisemantics is stymied.

Project Aims

The main aim of this project is to gain an understanding of the commonalities in the adoption of Agrisemantics across all communities. From these common themes we wish to identify the drivers of, and roadblocks to, the adoption of AgriSemantics in private industry, as well as possible blind spots in the current development of publicly available resources. A secondary aim of this study is to publicise Agrisemantics to decision makers. The current COVID crisis has exposed the fragility of agricultural supply chains. Agrisemantics facilitates interoperability and has the potential to distribute risk by allowing the ad-hoc integration of small suppliers into larger supply chain systems. This distribution of risk removes the single points of failure that arise when supply chains depend on a small number of large suppliers. However, many decision makers are unaware of this. It is hoped that this study will raise awareness of how Agrisemantics can benefit agricultural companies.

Methodology

The research methodology uses in-person interviews and a questionnaire to elicit opinions about:

  1. The adoption of AgriSemantics in private industry and the public sector
  2. The flaws in current AgriSemantic public resources
  3. How funding impedes or enables AgriSemantic projects
  4. Whether there is any incentive for knowledge transfer from publicly funded projects to private industry
  5. Whether there is any pressure from government agencies to force private industry to adopt standards
  6. The measurable benefit of semantic resources and technologies to the targeted goals of researchers and business decision makers

The interview process is a free-form interview in which the interviewee expresses their opinion about the core themes, as well as any issues that they think are important.

The study is interested in interviewing people with the following profiles:

  1. Individuals who approve funding for AgriSemantic projects
  2. Individuals who have served on committees that set funding criteria for agriculture innovation projects
  3. Technical people who work with data integration on a day-to-day basis
  4. Project managers
  5. CEOs of agribusiness companies
  6. NGOs who produce publicly available semantic resources
  7. Academics
  8. Technical professionals serving on committees tasked with the development of standards supporting interoperability
  9. Agribusiness entrepreneurs
  10. Supply chain professionals

Study Outputs

The study aims to produce a high-quality policy paper that seeks to influence all levels of the Agrisemantic and agricultural communities, and to put Agrisemantics front and centre in the current debate about food security and the stability of the food supply chain.

Dynamic Visualisations for Analysing Road Accidents and Traffic Conditions

REGISTER HERE

Modern data analysis often requires special techniques for handling complex data structures. Interactive graphs can provide insights into multivariate datasets by communicating the key aspects in a more intuitive way than isolated bar charts or static maps.

This webinar will present the WebGLayer tool as an enabler of dynamic visualisations that make spatio-temporal patterns, relationships and trends in the underlying data more apparent. Using case studies from Pilsen (CZ) and Flanders (BE) we’ll show how policy makers can use WebGLayer to address local problems such as traffic congestion and road accidents.

Others who might find this event useful include web developers, Open Source enthusiasts, social scientists, journalists and civil society groups interested in exploring social issues through data visualisations.

You will hear from:

Even if you can’t join live, register now and we’ll send you a link to the recorded webcast to watch at your convenience.

PoliRural Newsletter No.3

The PoliRural project issued its third newsletter in June 2020, with the following content:

  • Population and Rural Attractiveness, a Sample System Dynamics Model by Antoni Oliva Quesada (22sistema)
  • Coronavirus vaccine being developed at MIGAL – Galilee Research Institute, Israel by Prof. Uri Marchaim (MIGAL – Galilee Research Institute)
  • PoliRural Innovation Hub by Petr Uhli (Czech Center for Science and Society)
  • Building Synergies: SHERPA – Rural Science-Society-Policy Interfaces by Roxana Vilcu (Communications Officer for SHERPA)

For the full text of the newsletter, please go to https://polirural.eu/newsletter/3/.

EO4AGRI online workshop “Galileo, EGNOS, and Copernicus for Agriculture”

In Europe, we have two major space-based programmes: Galileo and Copernicus. This webinar addresses how combining the navigation and positioning tools of Galileo with the Earth observation data and services of Copernicus can improve food security and agriculture in general.

There is great untapped potential in combining positioning data from Galileo and EGNOS with Earth observation data for agriculture. Additionally, the COVID-19 virus is unfortunately not only harming our health; it is also jeopardizing our food security. It is evident that we need to step up our efforts to combine all the resources and knowledge we have to secure a continued good life, not only for Europeans but for our entire planet.

Students, researchers, data analysts, participants in European, national and international projects, and service and application developers will learn more about Galileo, EGNOS and Earth observation (the Copernicus programme) from our speakers:

  • María-Eva Ramírez works at INECO as a GNSS expert, as part of the SpaceOpal team at the GSC (European GNSS Service Center) for Galileo adoption and market development, focused mainly on EGNSS applications in the agriculture and geomatics domains.
  • Sofía Cilla works as a Service Adoption Manager with the goal of promoting EGNOS usage in the different GNSS user communities (e.g. aviation, rail, maritime, agriculture and geomatics), both today and in those that may emerge in the future.
  • Joaquín Reyes González is a Market Development Technology Officer at the GSA, working in the professional high-precision market on EGNOS and Galileo, with a focus on agriculture.
  • David Kolitzus is a Senior Expert and Project Manager at GeoVille, with expertise in IT and remote sensing.
  • Bente Lilja Bye has been a member of the GEO community since 2004, engaged both as representative in the GEO plenary, in committees and contributing to the GEO Work Programme, and currently represents Norway on the GEO Programme Board. Bente runs a small research and consultancy company, BLB, focusing on transforming Earth observation data to information and knowledge for societal benefit.

Join us using the link below on Tuesday 26th May, 15:00 CEST

Live Webinar on How to Overcome Data Challenges in Transport Policy Making

Live Webinar

How to overcome data challenges in transport policy making? Lessons from the PoliVisu pilots

Register at https://zoom.us/webinar/register/WN_sbaLolXgQ6yIyU1xYb6XOQ

In this webinar we’ll discuss various data challenges experienced by cities, with a particular focus on three PoliVisu pilots: Ghent, Flanders and Issy-les-Moulineaux. Representatives of local and regional administrations will speak about the issues they faced and how they addressed them using the PoliVisu solution.

This webinar is open to everyone. That said, the people who would find the event especially useful are public sector staff who either work with data directly (analysts, data officers etc.) or depend on it to make informed decisions, e.g. department managers, councillors, mayors, CEOs and elected officials.

You will hear from:

Why should I attend?

  • Learn about data challenges that cities in Europe are facing
  • Discover ways to overcome them using best practice from the PoliVisu pilots
  • Find out why PoliVisu was created and how it can help your city make smarter policy decisions
  • Get a sneak peek at other PoliVisu services and future events

Even if you can’t join live, register now and we’ll send you a link to the recorded webcast to watch at your convenience.

Challenge #4: Traffic Modelling from web browser – use case of Františkovy Lázně

Team: Daniel Beran, Jan Blahník, Petr Trnka, Eva Podzimková, Zuzana Soukupová, Jan Sháněl, František Kolovský, Jan Šťastný, Jan Martolos and Karel Jedlička, supported by the PoliVisu and DUET H2020 projects.

The goal of our team is to demonstrate how interactive traffic modelling can improve traffic planning in any city. The demonstration consists of gathering available data and deploying a Traffic Modeller app for Františkovy Lázně (a city of 5,000 people in Czechia).

Traffic Modeler (TraMod) is a tool for transport modelling developed in collaboration between traffic engineers and IT and GIS specialists. It can be fully implemented in a server environment with an application programming interface (API) for mobile and web applications. This gives city or regional government representatives the opportunity to test various traffic scenarios within seconds, without needing to install and learn desktop traffic modelling software, or to contact traffic engineers every time a new roadwork appears in the region.

Our workflow for the Dubrovnik Hackathon is as follows:

  • gather sufficient data about the traffic network and about simulated traffic generators,
  • calculate the traffic model and import it into a spatial database, where it can be accessed by our traffic modeler,
  • use this model for modelling specific traffic scenarios via the traffic modeler’s API (see the image below).

The final step is to develop a web graphical user interface (GUI) similar to the one already in action for the PoliVisu pilot city of Pilsen (see the image below, or watch a video of TraMod in action). This application allows users to calculate various traffic scenarios (e.g. changing the free-flow speed or capacity of a road segment) in near real time, using only a web browser with a network connection.
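In practice, a browser GUI of this kind just submits a scenario description to the modelling API and renders the recomputed traffic state. The sketch below illustrates the idea; the endpoint URL and payload field names are hypothetical assumptions, not TraMod’s documented interface.

```python
import json

# Hypothetical endpoint: a real deployment would expose its own URL.
TRAMOD_URL = "https://example.org/tramod/api/scenarios"  # assumed

def build_scenario(segment_id, free_flow_speed_kmh, capacity_vph):
    """Describe a what-if change to a single road segment."""
    return {
        "modifications": [
            {
                "segment_id": segment_id,
                "free_flow_speed_kmh": free_flow_speed_kmh,
                "capacity_vehicles_per_hour": capacity_vph,
            }
        ]
    }

# The body a web GUI (or script) would POST to the API to recompute traffic,
# e.g. halving the speed on segment 42 to simulate a roadwork:
scenario = build_scenario(42, 30.0, 800)
body = json.dumps(scenario)
```

Because the heavy computation runs server-side, the client needs nothing beyond the ability to send this request and draw the response.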

Challenge #6: Integrating INSPIRE with Citizen Science and Earth observations authentication systems

Mentors: Andreas Matheus, Hector Rodriguez

The scope of the challenge is to enhance your geospatial and/or INSPIRE-enabled web-based or mobile application so that it connects to Citizen Science and/or Earth Observation data. More specifically, the challenge will focus on improving accessibility to protected resources while also enabling their direct consumption and utilisation by third-party applications.

To enhance your existing web-based or mobile application to contribute to citizen science and crowdsourcing activities within the LandSense Citizen Observatory (https://landsense.eu), you would need to implement OpenID Connect in your application so that it can interact with the LandSense Authorization Server (https://as.landsense.eu/). The LandSense Authorization Server is a core output of the project; more details can be found in the public deliverable “LandSense Engagement Platform – Part I”.

To initiate registration, you can choose to use a static registration page or leverage the RFC 7591-compliant dynamic client registration endpoint. A registered application can then use the LandSense federation, including login options from Google, Facebook or eduGAIN (approximately 2,800 university and research organisational logins). The collection and processing of any personal data is compliant with the EU’s General Data Protection Regulation (GDPR). When registering the application, you can control the degree of personal information you need: a user can be simply authenticated, labelled with a cryptonym, or identified with personal information.
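As a rough sketch of the dynamic registration option, an RFC 7591 registration is simply a JSON document POSTed to the server’s registration endpoint. The endpoint path below is an assumption for illustration; the actual URL should be taken from the Authorization Server’s standard OpenID Connect discovery document (`/.well-known/openid-configuration`).

```python
import json

AS_BASE = "https://as.landsense.eu"
REGISTRATION_ENDPOINT = AS_BASE + "/register"  # assumed path; see discovery doc

def registration_request(app_name, redirect_uri):
    """Build the JSON body for a dynamic client registration POST (RFC 7591)."""
    return {
        "client_name": app_name,
        "redirect_uris": [redirect_uri],
        "grant_types": ["authorization_code"],
        "response_types": ["code"],
        "token_endpoint_auth_method": "client_secret_basic",
        # Request only what you need: the "openid" scope alone authenticates
        # a user without asking for further personal information.
        "scope": "openid",
    }

body = json.dumps(registration_request("my-app", "https://my-app.example/cb"))
# The server's JSON response would contain the issued client_id (and a
# client_secret for confidential clients).
```

This mirrors the privacy point above: the scopes you register determine how much personal data your application ever receives.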

To contribute to Citizen Science with your application, you will need to interact with the LandSense platform. Additionally, you may use an OGC SensorThings API for accessing existing data or inserting new observations from the SCENT Harmonisation Platform (http://scent-harm.iccs.gr/). The latter includes an OAuth2 resource provider that is also integrated within the LandSense federation.

Last but not least, you will also have the opportunity to connect to the NextGEOSS Single Sign-On (https://nextgeoss.eu/platform-services/user-management/) and integrate protected EO resources within your application or utilise existing applications. Details on how to interact specifically with the NextGEOSS User Management system are available here: https://github.com/ec-nextgeoss/nextgeoss-integration-guide-um

As a participant in this challenge, you should be familiar with OpenID Connect / OAuth2 principles and be the developer of the application that you bring to enhance. During the hackathon you will learn how to integrate an OpenID Connect library such as HelloJS into your web-based application, and how to set up the library to connect to a third-party OpenID Connect Authorization Server.

Yes, I want to register for Challenge #6!

Challenge #8: Improve interoperability between methods for sharing in-situ and citizen-sourced data

The goal of the challenge is to make datasets provided by H2020 Citizen Observatories, as well as by other citizen-science projects and initiatives, available through the SensorThings API standard, and to develop and test tools that provide a combined visualisation of data coming from different sources. This also involves sharing environmental measurements coming from different IoT devices and in-situ monitoring sensor networks, with the aim of establishing the combined use of data and services among different platforms for improved environmental monitoring.

More specifically, most recent projects and initiatives base their implementations on different standards, such as the OGC Sensor Observation Service (SOS), which defines a web service interface for querying observations, sensor metadata and representations of observed features, or the even more frequently used OGC Web Feature Service (WFS). Alternatively, many initiatives define their own specifications to meet the needs of their current projects. Integrating such data therefore requires additional effort to develop specific translators.

Such standards (i.e. OGC SOS) are more applicable to in-situ sensors with a fixed location, and thus do not fit the citizen-science paradigm, which involves monitoring an environmental phenomenon with different portable sensors at different locations (there is no fixed binding between the location and the sensor, or between the user and the sensor). Moreover, requests such as extracting the latest observations from sensors cannot be executed in an efficient or scalable way.

Thus, the key use cases under this challenge are as follows:

  1. Implementation of “data translators” that facilitate the conversion of resources exposed by OGC SOS and WFS into SensorThings API-compatible schemas. In particular, the SensorThings API implementation provided by the SCENT Citizen Observatory shall be used as the reference application into which resources from other projects will be ingested.
  2. Visualisation of resources exposed by the SensorThings API through dedicated interfaces.
  3. Integration of different environmental monitoring datasets through the use of special “data translators”.
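A minimal sketch of use case 1: a “data translator” maps fields of a source record (here a simplified flat dictionary, as it might look after parsing an SOS or WFS response) onto the SensorThings API Observation entity. The input field names are hypothetical; the output keys follow the SensorThings data model.

```python
# Illustrative translator from a flat source record to a SensorThings
# Observation. Input keys ("time", "value", "stream_id") are assumptions
# about the source format; output keys are SensorThings entity properties.

def to_sensorthings_observation(record):
    """Map a flat source record onto a SensorThings Observation."""
    return {
        "phenomenonTime": record["time"],  # when the value was observed
        "result": record["value"],         # the observed value itself
        # Link the observation to an already-registered Datastream:
        "Datastream": {"@iot.id": record["stream_id"]},
    }

src = {"time": "2020-05-26T15:00:00Z", "value": 0.31, "stream_id": 7}
obs = to_sensorthings_observation(src)
# POSTing `obs` to <service-root>/v1.0/Observations would insert it.
```

A real translator would additionally map sensor metadata (Things, Sensors, ObservedProperties) and handle units of measurement, but the per-observation mapping stays this simple.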
Yes, I want to register for Challenge #8!

Challenge #7: Establish the connection of Citizen Observatories resources with central catalogue

The goal of the challenge is to enable the integration of datasets provided by Citizen Observatories, as well as by other citizen-science projects and initiatives, with the NextGEOSS catalogue, as an approach to connecting citizen science to GEOSS.

In the context of the European Union’s Horizon 2020 research and innovation programme, four sister projects on Citizen Observatories (COs) for Environmental Monitoring (GROW, GroundTruth 2.0, LandSense and SCENT) have been launched and realised. During these projects, a variety of smart and innovative applications have been implemented, enabling citizens to engage with environmental monitoring during their everyday activities. The use of mobile devices and low-cost portable sensors, coupled with data analytics, quality assurance and modelling approaches, paves the way for citizens to have an active role and voice in environmental decision-making. The capabilities of the above-mentioned tools and approaches have been demonstrated in a variety of citizen-science campaigns conducted across different European regions and beyond, leading to the collection of valuable environmental information. The datasets involve the following themes:

  • Land cover/land use (point observations, maps, change detection validation, land use classification, in-situ validation, cropland field size and interpretations) 
  • Soil parameters (soil moisture, air temperature, levels of light)
  • Planting and harvesting dates
  • Water parameters (water level, water velocity) 
  • Air quality parameters (black carbon concentration) 
  • Phenological observations (species and pheno-phase identification)
  • Disaster resilience (maps and time series data related to flood monitoring)
  • Urban green space quality (users’ perception through the provision of responses to questionnaires and images) 

The datasets are managed by different infrastructures with various access endpoints and make use of OGC standards (e.g. WMS, WFS, SOS), while at the same time being accompanied by dedicated metadata.

Thus, in order to facilitate metadata ingestion into the NextGEOSS catalogue, continuously running harvesters (for data sources which have new data available daily) and on-demand harvesters (for static collections of data) shall be implemented.

Yes, I want to register for Challenge #7!

————–

Data Cataloguing in NextGEOSS

One of the services available in NextGEOSS is Data Cataloguing. Cataloguing data in NextGEOSS brings benefits such as:

  • Your data will be easily discoverable and reachable by a wider audience, such as the entire GEO Community, through the NextGEOSS catalogue;
  • Original data sources and data providers become more visible: the NextGEOSS catalogue has a page listing all the data providers;
  • Input data can easily be ingested automatically by applications, thanks to the OpenSearch interface, which allows applications to find the catalogued datasets and the enclosure links pointing to where the real data is;
  • Data catalogued in the NextGEOSS Catalogue can be used by scientific communities in their applications.

The NextGEOSS Catalogue does not store data: only metadata and download links to where the real data is stored (enclosure links) are catalogued. Metadata ingestion into the NextGEOSS catalogue is quite flexible, since it is possible to harvest metadata from different interfaces such as OpenSearch, CSW, WFS, the CKAN API, REST APIs, OAI-PMH and others. Different types of data connectors can also be built, depending on the frequency of data publication at the original data sources:

  • Continuously running harvesters (for data sources which have new data available daily)
  • On-demand harvesters (for static collections of data)

NextGEOSS harvesters also have recovery mechanisms to deal with possible failures of the data catalogue or the original data source. For example, if the original data source is down for some time, then as soon as it is available again the harvester will restart the harvesting process from the last dataset harvested, ensuring that no data is missed.
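This recovery behaviour can be sketched as a harvester loop that checkpoints the id of the last ingested dataset and resumes from that point after a failure. The paging function below is a hypothetical stand-in for a real catalogue client (OpenSearch, CSW, etc.), not the NextGEOSS implementation.

```python
# Toy data source: six datasets with contiguous 1-based ids.
CATALOG = [{"id": i} for i in range(1, 7)]

def fetch_page(since=None, page_size=2):
    """Return the next page of datasets with id greater than `since`."""
    start = 0 if since is None else since  # ids are 1-based and contiguous here
    return CATALOG[start:start + page_size]

def harvest(fetch, checkpoint):
    """Ingest datasets newer than the checkpoint, updating it as we go."""
    ingested = []
    page = fetch(since=checkpoint.get("last_id"))
    while page:
        for dataset in page:
            ingested.append(dataset["id"])         # ingest the metadata record
            checkpoint["last_id"] = dataset["id"]  # persisted in a real system
        page = fetch(since=checkpoint["last_id"])
    return ingested

full_run = harvest(fetch_page, {})             # fresh harvest: ids 1..6
resumed = harvest(fetch_page, {"last_id": 4})  # after a failure: only ids 5..6
```

Because the checkpoint advances per dataset rather than per run, a crash mid-harvest costs at most the current record, and nothing is ingested twice or skipped.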

For metadata to be catalogued in the NextGEOSS Catalogue, the data provider must fulfil some requirements:

  • A queryable API or interface for accessing the metadata in the original data source (OpenSearch, CSW, a REST API, etc.);
  • Access to the original metadata records in a methodical way (for example, via temporal queries);
  • Metadata fields in the original data source that are clear and, ideally, follow a metadata standard;
  • A clear understanding of how often data is published at the original data source (frequency), of the different product types, and of whether the data belongs to any area of study (such as agriculture, marine, food security or others);
  • The real data must be kept available for a considerable time period, to ensure that the links to the original data in the NextGEOSS Catalogue do not break;
  • Good availability and short response times when querying the original data source.

All of these requirements are considered during the feasibility analysis performed by the development team. If the requirements are fulfilled, it will be possible to build the data connector (harvester) which, after a set of tests in a staging instance of the catalogue, will be deployed in production.

The main obstacles to building data connectors are:

  • Complex metadata, and/or metadata not following any specific standard, making it difficult to map the metadata fields;
  • Metadata with many repeated fields and repeated information, requiring additional metadata filters;
  • Limited APIs and interfaces which do not allow methodical queries or organisation of the metadata records;
  • Metadata or interfaces that are not yet mature, since they are still being updated;
  • Unstable data sources and long response times to queries;
  • Short retention periods for the real data at the data provider;
  • Data sources that do not provide links to the real data within the metadata, making it impossible to have enclosure links to the real data in the NextGEOSS catalogue.