The annual NFO meeting took place from 4 to 5 July, hosted on the campus of the University of Patras.
A key effort was to identify metadata standards for NFO-specific data and data products and to create a service for them.
For this purpose, FRIDGE (EU NFO Federated specific data and products gateway and Virtual Laboratory) is currently under construction as an NFO service. It will be the common gateway through which NFO-specific data and high-level data products from all NFOs can be discovered and downloaded. It will also host simple visualization tools for multidisciplinary data.
The IT team is currently creating common database (DB) standards for NFO-specific data and data products, taking advantage of data structures already available in the NFOs, e.g. the geochemical database schema used by TABOO and QuakeML, which is used in all NFOs. The IT team introduced and demonstrated a proof of concept of two new web services related to the Vp/Vs ratio and to radon time series. Details are given in the Use Case section.
CREW (EU Testing Center for Early Warning & Source Characterization) is also defined as another EU-level service. It is a testing facility, built on real-time and offline high-resolution data, aimed at fostering the development of next-generation methodologies and software for real-time monitoring of faulting processes.
The TCS also plans to propose Transnational Access (TNA) at the NFOs. The NFO-TCS offers opportunities for free-of-charge TNA to selected research groups (candidate NFOs) and companies (to develop and test new sensors) within a multidisciplinary platform.
The Data, Data Products, Software and Services (DDSS) provided by this TCS are:
- Raw (standard and specific) data coming from dense near-fault multidisciplinary networks;
- Multidisciplinary high-level Data Products.
46 DDSS elements have been proposed. All are related to the continuous acquisition and archiving of long multidisciplinary time series and data products. The high-priority group contains 30 DDSS elements: 12 Data elements, 15 Data Product elements and 3 Services. Of these, 12 DDSS are seismological data or data products, 5 are geodetic data or data products, 4 are geochemical data and 1 is strain data, indicating the effective multidisciplinarity within each NFO. The non-high-priority group contains 20 DDSS elements from different disciplines. Almost all data and data products concern more than one NFO, depending on the specific national or international RIs. Each DDSS is linked to the corresponding harmonization group. Most of the elements from seismology, geodesy and satellite observations are or will be exposed through the existing services of the authoritative WP (e.g. EIDA). Specific data, high-level data and data products produced by the NFO(s) will be exposed through the NFO-TCS data gateway.
The current DDSS priority list for the NFO-TCS contains information on the current state of the metadata standards, the data formats, the service names, the DDSS categories in ICS, the names of the related Harmonization Groups, and the one-to-one link between each DDSS element and the institution acting as data supplier.
The NFO community relies on the services provided by other Thematic Core Services for standard data (e.g. seismic and geodetic), and on direct access to the e-infrastructures of individual NFOs, via the Integrated Core Services web services, for the access and distribution of non-standard data (e.g. strainmeter, tiltmeter, geochemical and electro-magnetotelluric data). Within EPOS-IP the TCS will mainly work on the standardisation and provision of the high-priority DDSS.
Some data (seismological and GNSS raw data) can be discovered and downloaded using services developed by other TCS, which are authoritative for these kinds of data. However, the NFO-TCS manages a variety of data and data products that need specific formats and metadata descriptions, as well as services through which they can be discovered.
FRIDGE (EU NFO Federated specific data and products gateway and Virtual Laboratory) is the common gateway to all NFOs, able to identify the metadata standards of NFO-specific data and data products and to download them. It will also host simple visualization tools for the comparison of multidisciplinary data.
FRIDGE – EU NFO Federated specific data products gateway and Virtual Laboratory (VL) for display architecture
FRIDGE is planned to be:
- a virtual environment offering simple visualization tools describing multidisciplinary data products and fault anatomy, aimed at promoting and disseminating Earth science, including the state of scientific knowledge on the earthquake source and the tectonic processes generating catastrophic events, at different levels;
- a novel and advanced e-infrastructure for visualization, analysis, comparison and mining of multidisciplinary time series and high-level data and products, for scientific purposes;
- the specific data and data products gateway for data discovery and download common to the European NFOs;
The FRIDGE prototype will be implemented in collaboration with the ICS, and our proposal is to move the VL (in the mid-term) under ICS control in order to develop a proper, modern virtual working environment.
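Since QuakeML is the event format shared by all NFOs, the kind of harmonized access FRIDGE targets can be illustrated with a minimal sketch of extracting hypocentre parameters from a QuakeML 1.2 document using only the Python standard library. The sample document and its values are illustrative; real catalogs carry many more elements.

```python
import xml.etree.ElementTree as ET

# Minimal illustrative QuakeML 1.2 fragment (not a real catalog entry).
SAMPLE = """<q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2"
                       xmlns="http://quakeml.org/xmlns/bed/1.2">
  <eventParameters publicID="smi:local/catalog">
    <event publicID="smi:local/event/1">
      <origin publicID="smi:local/origin/1">
        <time><value>2016-08-24T01:36:32Z</value></time>
        <latitude><value>42.70</value></latitude>
        <longitude><value>13.23</value></longitude>
        <depth><value>8000</value></depth>
      </origin>
      <magnitude publicID="smi:local/mag/1">
        <mag><value>6.0</value></mag>
      </magnitude>
    </event>
  </eventParameters>
</q:quakeml>"""

NS = {"bed": "http://quakeml.org/xmlns/bed/1.2"}

def read_events(xml_text):
    """Return a list of dicts with basic hypocentre parameters."""
    root = ET.fromstring(xml_text)
    events = []
    for ev in root.iter("{http://quakeml.org/xmlns/bed/1.2}event"):
        origin = ev.find("bed:origin", NS)
        mag = ev.find("bed:magnitude/bed:mag/bed:value", NS)
        events.append({
            "time": origin.find("bed:time/bed:value", NS).text,
            "lat": float(origin.find("bed:latitude/bed:value", NS).text),
            "lon": float(origin.find("bed:longitude/bed:value", NS).text),
            "depth_m": float(origin.find("bed:depth/bed:value", NS).text),
            "mag": float(mag.text) if mag is not None else None,
        })
    return events

events = read_events(SAMPLE)
print(events[0]["lat"], events[0]["mag"])
```

In a production gateway the parsing would be delegated to a full QuakeML library; the point here is only that a shared, namespaced XML schema makes cross-NFO discovery tooling straightforward.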
Effective risk mitigation relies on monitoring seismicity to study the preparation phase and the initiation of large events, for earthquake early warning (EEW) and ground-shaking characterization. NFOs are natural facilities for testing and comparing procedures and codes for real-time source characterization and related hazard products, such as EEW.
CREW (EU Testing Centre for Early Warning & Source Characterization), based on dense networks, provides researchers with a testing facility to evaluate and compare, in a transparent and equal manner, new methods, data and software. This will help the community to build the next generation of real-time systems, promoting leading-edge science and technology.
CREW tests fast and accurate real-time procedures based on dense networks. It is a testing centre for:
- Comparing procedures with metrics defined by the community
- Comparing new software/methods while developing leading-edge science
- Using existing software on selected datasets
CREW operates on ISNet (the Irpinia Seismic Network). Minimum-latency waveform data from the network are made available on a single SeedLink server to the EEW algorithms, which run on separate virtual machines. Each software package delivers EEW alerts in a standardized format (QuakeML) to a single database, which is used for performance evaluation. Performance criteria include location, magnitude, lead time and ground-motion estimation (with uncertainties), and make use of official authoritative bulletins. Performance results are finally published on a dedicated web page.
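The performance evaluation described above can be sketched as follows. The function names, the fixed S-wave speed, and the alert/bulletin records are illustrative assumptions, not the actual CREW implementation: lead time is approximated as the S-wave arrival at a target site minus the alert issue time, with location error measured as epicentral distance from the authoritative bulletin.

```python
import math

def epicentral_distance_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two epicentres, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def evaluate_alert(alert, bulletin, target_lat, target_lon, vs_km_s=3.5):
    """Compare an EEW alert against the authoritative bulletin.

    Lead time = S-wave arrival at the target site minus alert issue time,
    assuming a constant S-wave speed (a crude illustrative approximation).
    """
    loc_err = epicentral_distance_km(alert["lat"], alert["lon"],
                                     bulletin["lat"], bulletin["lon"])
    mag_err = alert["mag"] - bulletin["mag"]
    dist_to_target = epicentral_distance_km(bulletin["lat"], bulletin["lon"],
                                            target_lat, target_lon)
    s_arrival = bulletin["origin_time"] + dist_to_target / vs_km_s
    lead_time = s_arrival - alert["issue_time"]
    return {"loc_err_km": loc_err, "mag_err": mag_err, "lead_time_s": lead_time}

# Illustrative alert vs. bulletin (times in seconds after origin)
alert = {"lat": 40.80, "lon": 15.30, "mag": 5.6, "issue_time": 8.0}
bulletin = {"lat": 40.76, "lon": 15.31, "mag": 5.8, "origin_time": 0.0}
scores = evaluate_alert(alert, bulletin, target_lat=40.85, target_lon=14.27)
print(scores)
```

A real testing centre would of course use travel-time tables rather than a constant velocity, and ground-motion metrics in addition to source parameters, but the comparison logic against an authoritative bulletin takes this general shape.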
The availability of multidisciplinary data sets enables many simple use cases.
Generally, to create these use cases, we solved complex problems related to:
- the scientific concept
- the query building
- the common DB structure
- the query performance
- standard definitions and defaults for the input parameters
- the standard format of the output (data and metadata)
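The points about standard input-parameter definitions, defaults and query building can be illustrated with a small sketch. The parameter names, bounds and the schema implied by the WHERE clause are hypothetical, not the agreed TCS standard:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class EventQuery:
    """Standardized input parameters for an earthquake-selection service,
    with explicit defaults (names and bounds are illustrative)."""
    min_lat: float = -90.0
    max_lat: float = 90.0
    min_lon: float = -180.0
    max_lon: float = 180.0
    min_mag: float = 0.0
    max_depth_km: float = 700.0
    start: str = "1990-01-01T00:00:00Z"
    end: str = "2100-01-01T00:00:00Z"

    def to_sql(self):
        """Render the parameters as a parameterized SQL WHERE clause,
        so the same request maps onto each NFO's database uniformly."""
        clause = ("lat BETWEEN ? AND ? AND lon BETWEEN ? AND ? "
                  "AND mag >= ? AND depth_km <= ? AND otime BETWEEN ? AND ?")
        params = (self.min_lat, self.max_lat, self.min_lon, self.max_lon,
                  self.min_mag, self.max_depth_km, self.start, self.end)
        return clause, params

q = EventQuery(min_lat=42.0, max_lat=43.5, min_mag=2.0)
print(json.dumps(asdict(q)))   # standardized, self-describing request
clause, params = q.to_sql()
```

Declaring the defaults once, in one place, is what makes the request self-describing: the serialized form above doubles as the output metadata documenting exactly which selection produced a given data product.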
Display type | Content
---|---
Time Series | X chemical component in time; geodetic displacement (all components); number of earthquakes per day; b-value in time; Vp/Vs, Vp, Vs and attenuation in time at a site; heat flux in time
Maps with surfaces | X chemical component distribution map; layers from 3D models (Vp, Vs, Vp/Vs, Poisson's ratio, etc.); fault surfaces; top and bottom surfaces of geological layers; seismological discontinuity surfaces; heat flux; vertical sections through 3D models
Positional maps | Interactive maps of selected site positions; interactive maps of selected earthquake positions; space-time earthquake distribution
Histograms and dispersion | Error distributions of selected earthquake parameters; depth distribution of selected earthquakes; depth errors vs. origin-time errors; dromochrones (distance vs. P and S travel time)
VLAB | Vp/Vs, Vp, Vs, number of earthquakes per day, CO2 flux, radon flux and geodetic displacements, all at a site
In the following, we specify one internal and one cross-disciplinary use case in detail.
Use case name/topic | Selecting and viewing earthquakes’ distribution in maps and vertical sections | Viewing and comparing Vp/Vs ratio to Radon concentration in time |
---|---|---|
Use case domain | This use case is related to a single discipline: seismology. The goal is to allow the user to select earthquake locations from the specific NFO DB according to spatial, temporal and quality criteria, and to plot their distribution on a geographic map and along a vertical section whose endpoints are interactively selected on the map. The aim is to help understand the fault-system geometry and the spatial-temporal trend of the seismicity pattern. | This use case is cross-disciplinary. The goal is the comparison of two “entities” from the seismological (Vp/Vs time series) and geochemical (Rn concentration time series) disciplines. The scientific motivation is the search for changes (in space and time) related to the deformation process (e.g. fracturing and/or fluid-migration processes) occurring during the pre-, co- or post-seismic phase. |
Use case description | As a seismologist, I want to observe the spatial and temporal distribution of earthquake sources in the fault-system volume in different 2D views. I want to be able to choose the selection criteria, and to use a color palette to identify the time range of plotted locations. | As a seismologist, I want to observe and compare the spatial and temporal behavior of the P- to S-wave velocity ratio (referable to the elastic parameters of the rock volume) with the temporal and spatial pattern of Rn concentration in a defined rock volume (referable to ongoing deformation processes), looking for statistically coherent change points, possibly ascribable to the same underlying physical process, such as the earthquake preparatory phase. |
Actors involved in the use case | | |
Priority | High | Medium |
Pre-conditions | User must have logged in | User must have logged in |
Flow of events – user view | The following steps answer the question: 1. The seismologist chooses the specific NFO [the use case is focused on just one NFO]. 2. The seismologist chooses the criteria by which to select earthquake locations. 3. The seismologist produces a geographical map with gray-shaded topography, the seismic-station distribution, earthquake locations plotted as dots colored as a function of time, and the strongest earthquakes plotted with the same color code but a different symbol, with the associated focal mechanism if available. | The following steps answer the question: 1. The seismologist chooses the specific NFO. 2. The seismologist chooses the seismological (Vp/Vs) selection criteria. 3. The seismologist chooses the geochemical (Rn) selection criteria. 4. The seismologist produces a combined X (time), Y (Vp/Vs ratio) and Y' (Rn concentration) dispersion plot with symbols and strokes. Alternative sequence: the seismologist may perform step 3 (geochemistry) before step 2 (seismology); step 1 must always come first. |
System workflow - system view | 1. The user interface activates one task (query_locations) connecting to the specific NFO database. 2. The task connects to the database and performs a query, typically combining information from the locations, quality, magnitudes and focal-mechanism tables. 3. The task creates a temporary VIEW (see MySQL) in the DB storing the results of the combined query (one line per location). 4. The data contained in the VIEW are also stored in a downloadable QuakeML catalog file. 5. A listening service takes data from the VIEW and produces the interactive map. 6. The interactive map allows the user to draw a rectangle; based on its extremes, a sub-task is activated that selects earthquakes from the VIEW with a point-in-polygon function, and the locations are plotted in a vertical section. 7. A button is made available to reset the vertical-section plot and redraw the rectangle. | 1. The user interface receives the input parameters for query_VpVs and query_Radon. 2. It activates two tasks, one per CPU, connecting to the specific NFO database. 3. Each task connects to the database and searches, for the chosen recording site, the records matching the required criteria; the SQL queries may be complex or simple, depending on the DB structure, operating on the basic tables containing (a) for Vp/Vs: P arrival times (and related quality parameters), S arrival times (and related quality parameters), earthquake locations, quality of earthquake locations, takeoff angles and back-azimuth angles; (b) for radon concentration: Rn counts, site correction parameters and meteorological site parameters. 4. Each task writes a file to disk in a standardized format for time series (to be defined). 5. A listening service takes the files as soon as they are ready and produces an interactive plot in which only scales, symbols and colors can be changed. 6. A "Change criteria" button is made available that returns to the selection page of step 2 (see post-conditions). |
Post-conditions | The VIEW is kept in the DB as long as the user session is active or until the session timeout expires | The request is kept in memory to be reloaded as the default in step 6 of the system workflow |
Extension Points | No extension points. | No extension points. |
« Used » Use Cases | No other use cases. | No other use cases. |
Other Requirements | Privacy legislation; response time of the system | Privacy legislation; response time of the system |
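The first system workflow (temporary VIEW plus in/out selection for the vertical section) can be sketched with SQLite and a simple rectangle test. The table names, schema and sample events are hypothetical; the real service targets a MySQL NFO database and an arbitrarily oriented map-drawn rectangle rather than a lat/lon-aligned one.

```python
import sqlite3

# Hypothetical minimal schema standing in for the NFO locations table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE locations (id INTEGER, lat REAL, lon REAL, "
            "depth_km REAL, mag REAL)")
con.executemany("INSERT INTO locations VALUES (?,?,?,?,?)", [
    (1, 42.70, 13.23, 8.0, 6.0),
    (2, 42.75, 13.20, 9.5, 3.1),
    (3, 40.80, 15.30, 12.0, 2.0),   # outside the section rectangle
])

# Steps 2-3: temporary VIEW holding the result of the combined query.
con.execute("CREATE TEMP VIEW selected AS "
            "SELECT * FROM locations WHERE mag >= 2.0 AND depth_km <= 30.0")

def in_rectangle(lat, lon, rect):
    """Crude in/out test for a lat/lon-aligned rectangle
    (the real tool supports an arbitrary section trace)."""
    lat_min, lat_max, lon_min, lon_max = rect
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

rect = (42.5, 43.0, 13.0, 13.5)   # rectangle drawn on the map
section = [row for row in con.execute("SELECT id, lat, lon, depth_km FROM selected")
           if in_rectangle(row[1], row[2], rect)]
print(section)   # events to plot in the vertical section
```

Keeping the VIEW in the database, as the workflow prescribes, lets the sub-task re-filter for new rectangles without re-running the expensive combined query.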
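For the second workflow, one classical way to turn tables of P and S arrival times into a Vp/Vs time series is a Wadati fit per time window. The sketch below shows the fit for a single window on synthetic arrivals; the real service applies quality weighting and operates on the database tables listed in the workflow.

```python
def wadati_vp_vs(tp, ts):
    """Estimate Vp/Vs from paired P and S arrival times at one station
    for a set of earthquakes, via a least-squares Wadati fit:
    ts - tp = (Vp/Vs - 1) * (tp - t0), so the slope of (ts - tp)
    against tp gives Vp/Vs - 1 regardless of the origin time t0."""
    n = len(tp)
    y = [s - p for p, s in zip(tp, ts)]        # S-P intervals
    mx = sum(tp) / n
    my = sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(tp, y))
             / sum((xi - mx) ** 2 for xi in tp))
    return 1.0 + slope

# Synthetic arrivals generated with Vp/Vs = 1.73 (origin time t0 = 0)
tp = [1.0, 2.0, 3.0, 4.0]
ts = [t * 1.73 for t in tp]
print(round(wadati_vp_vs(tp, ts), 2))   # → 1.73
```

Computing this estimate in sliding time windows yields the Vp/Vs time series that the use case then plots against the radon concentration series in search of coherent change points.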