2023, Articolo in rivista, ENG
Asprino, Luigi; Daga, Enrico; Gangemi, Aldo; Mulholland, Paul
Data integration is the dominant use case for RDF Knowledge Graphs. However, Web resources come in formats with weak semantics (for example, CSV and JSON) or in formats specific to a given application (for example, BibTeX, HTML, and Markdown). To address this problem, Knowledge Graph Construction (KGC) is gaining momentum due to its focus on supporting users in transforming data into RDF. However, using existing KGC frameworks results in complex data processing pipelines that mix structural and semantic mappings, whose development and maintenance constitute a significant bottleneck for KG engineers. Such frameworks force users to rely on different tools, sometimes based on heterogeneous languages, for inspecting sources, designing mappings, and generating triples, thus making the process unnecessarily complicated. We argue that it is possible and desirable to equip KG engineers with the ability to interact with Web data formats by relying on their expertise in RDF and the well-established SPARQL query language. In this article, we study a unified method for data access to heterogeneous data sources with Facade-X, a meta-model implemented in a new data integration system called SPARQL Anything. We demonstrate that our approach is theoretically sound, since it allows a single meta-model, based on RDF, to represent data from (a) any file format expressible in BNF syntax, as well as (b) any relational database. We compare our method to state-of-the-art approaches in terms of usability (cognitive complexity of the mappings) and general performance. Finally, we discuss the benefits and challenges of this novel approach by engaging with the reference user community.
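As a rough illustration of the idea behind this unified access method, the sketch below (a hedged example using rdflib and made-up namespaces; it is neither the Facade-X meta-model nor the SPARQL Anything API) lifts a small JSON document into RDF triples so that it can then be queried with plain SPARQL:

```python
import json
from rdflib import Graph, Namespace, BNode, Literal

EX = Namespace("http://example.org/data/")  # illustrative namespace

def lift_json(doc):
    """Lift a flat JSON object into RDF triples (generic sketch, not Facade-X)."""
    g = Graph()
    g.bind("ex", EX)
    root = BNode()
    for key, value in doc.items():
        g.add((root, EX[key], Literal(value)))
    return g

g = lift_json(json.loads('{"title": "Knowledge Graphs", "year": 2023}'))
query = """
PREFIX ex: <http://example.org/data/>
SELECT ?title WHERE { ?doc ex:title ?title }
"""
for row in g.query(query):
    print(row.title)  # -> Knowledge Graphs
```

In the approach summarised above, a single RDF-based meta-model plays the role of this ad-hoc lifting for any supported format, so that the KG engineer only needs SPARQL.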
2023, Abstract in atti di convegno, ENG
Koivula, Hanna and Wohner, Christoph and Magagna, Barbara and Tagliolato Acquaviva d'Aragona, Paolo and Oggioni, Alessandro
Biodiversity and ecosystems cannot be studied without assessing the impacts of changing environmental conditions. Since the 1980s, the U.S. National Science Foundation's Long Term Ecological Research (LTER) Network has been a major force in the field of ecology for better understanding ecosystems. In Europe, the LTER developments are led by the Integrated European Long-Term Ecosystem, critical zone and socio-ecological system Research Infrastructure (eLTER RI), a currently project-based infrastructure initiative that aims to facilitate high-impact research and catalyse new insights about the compounded impacts of climate change, biodiversity loss, soil degradation, pollution, and unsustainable resource use on a range of European ecosystems and socio-ecological systems. The European LTER network, which forms the basis for the upcoming eLTER RI, is active in 26 countries and has 500 registered sites that provide legacy data, e.g. historical time-series data about the environment (not only biodiversity). Its site information and dataset metadata, including the measured variables, can be searched in the Dynamic Ecological Information Management System - Site and dataset registry (DEIMS-SDR, Wohner et al. 2019). While DEIMS-SDR data models utilize parts of the Ecological Metadata Language (EML) schema 2.0.0, location information follows the European INSPIRE specification. The future eLTER data is planned to consist of site-based, long-term time series of ecological data. The eLTER projects have defined eLTER Standard Observations (SO), which will include the minimum set of variables, as well as the associated method protocols, that can adequately characterise the state and future trends of the Earth's systems (Masó et al. 2020, Reyers et al. 2017). The current eLTER network consists of sites that differ in terms of infrastructure maturity or environment type and may focus on one or several of the future SOs, or may not yet be executing any holistic monitoring scheme. The main objective is to convert the eLTER site network into a distributed research infrastructure that incorporates a clearly outlined mandatory monitoring program. Essential to this effort are the suggested variables for eLTER SOs and the corresponding methods and protocols for relevant habitat types according to the European Nature Information System (EUNIS) in each domain. eLTER variables are described using the eLTER thesaurus "EnvThes". These descriptions are currently being enhanced with the InteroperAble Descriptions of Observable Property Terminology (I-ADOPT, Magagna et al. 2022) framework to provide the level of detail required for seamless data discovery and integration. Variables and their associated methods and protocols will be formalised to enable automatic site classifications, building on existing observation representations such as the Extensible Observation Ontology (OBOE), the Open Geospatial Consortium's Observations and Measurements, and the future eLTER Standard Observation ontology. DEIMS-SDR will continue to be used as a core service, with an RDF representation of its assets (sites, sensors, activities, people) currently being implemented. This action is synced with the Biodiversity Digital Twin (BioDT) project to ensure maximum findability, accessibility, interoperability and re-usability (FAIRness; Wilkinson et al. 2016) of data through FAIR Digital Objects (FDO).
Other (digital) assets such as datasets, models and analytical workflows will be documented in the Digital Asset Register (DAR), alongside semantic mapping and crosswalk techniques, to provide machine-actionable metadata (Schultes and Wittenburg 2019, Schwardmann 2020). The Biodiversity Digital Twin (BioDT) project is bringing together biodiversity and natural environment data from seven thematic use cases for modelling. BioDT prototypes rely on openly available data that comes from multiple heterogeneous sources using a multitude of standards and formats. In the pilot phase, merging data requires "hand picking" from selected sources, and automation of workflows would still require many additional steps. There are ongoing efforts in both the BioDT and eLTER projects to find the best ways and practices to bring the raw data together using suitable standards, but also to harmonise the other environmental variables by referring to vocabularies and possibly express the data as FDOs. Currently, both the EML schema and the Darwin Core standard (Darwin Core Task Group 2009; with registered extensions) allow referring to external schemas and vocabularies, which gives flexibility but may still prove too narrow for the multitude of data types and formats that natural environment data requires. We welcome discussion about how to create good practices for enriching and harmonising natural environment data and species occurrence data in a meaningful way. GBIF's new data model and enriching the raw data with semantic artefacts may prove to be the way to provide thematic data products that combine data from multiple sources.
2023, Monografia o trattato scientifico, ENG
Meghini C.; Bartalesi Lenzi V.
The Web makes a very large amount of information available to users in the form of documents. The Semantic Web is a fundamental extension of the Web, as it allows, in addition to documents, the sharing of data (including document metadata) in a standard format, along with their semantic context expressed in a formal and shared language. Applications in documentation science, biology, cultural heritage and electronic commerce have already demonstrated the validity of this approach. This volume constitutes a gentle introduction to the technologies and languages of the Semantic Web, clearly illustrating the steps necessary to transform a product published on the Web into a set of data that can be processed and reused across applications, users and communities. This is the second monograph in the ebook series "Digital Culture Notebooks", edited by the Laboratory of Digital Culture of the University of Pisa (http://www.labcd.unipi.it) and published by Simonelli editore. The series hosts short monographs on tools and research in the field of Digital Humanities that emerged from the work of the teachers and students who collaborate with the Laboratory. It aims to support a wider dissemination of digital culture, understood as the field in which the humanities and some sectors of informatics interact and collaborate.
2022, Abstract in atti di convegno, ENG
M. Alfè, V. Gargiulo, A.A. Abe, P. Calandra, P. Caputo, A. Le Pera, V. Loise, R. Migliaccio, M. Porto, C. O. Rossi, M. Urciuolo, R. Vaiana, G. Ruoppolo
Refuse Derived Fuels (RDFs) are generated from municipal solid wastes (MSWs) through a combined mechanical-biological processing. Their narrower chemico-physical characteristics make RDFs more suitable than MSWs for thermochemical valorisation purposes. For this reason, EU regulations encourage the use of RDF as a source of energy in the frameworks of sustainability and circular economy. Pyrolysis and gasification are promising thermochemical processes for RDF treatment since, with respect to incineration, they ensure an increase in energy recovery efficiency, a reduction of pollutant emissions, and the production of value-added products such as chemical platforms or fuels. In this work, the results of pyrolysis tests on a real RDF rich in plastic- and cellulose-based materials are reported. Pyrolysis tests have been performed in a tubular reactor at three final temperatures (550, 650 and 750°C), and the resulting gaseous, condensable and solid products have been analysed in terms of yield, chemico-physical characteristics and energy recovery, to highlight how this thermochemical conversion process can be used to accomplish waste-to-materials and waste-to-energy targets. RDF pyrolysis produces three products (gas, char and pyrolysis oil) with specific chemico-physical characteristics exploitable in unconventional technological applications. Among the three products, the most abundant, and also the most promising in terms of possible applications, is the condensable species fraction, whose highest yield was achieved at 550°C. The massive presence of waxes makes this fraction a potential candidate for the replacement of fossil-fuel-based material in bitumen and asphalt processing and rejuvenation. It is also worth noting that the final pyrolysis temperature has a strong influence on the segregation of some critical species such as S and N in the char, opening the way to its re-use as adsorbent, catalyst, material for energy harvesting devices and additive for the pavement industry, depending on its composition. The use of pyrolysis products for asphalt preparation is an emerging research topic and opens up an alternative use of pyrolysis products (liquids and solids) outside the fuel and chemicals industries, as well as the replacement of petroleum-derived products (e.g. crude oil) with products deriving from waste thermoconversion.
2022, Rapporto tecnico, ENG
Lippolis, Anna Sofia [ISTC-CNR]; Lodi, Giorgia [ISTC-CNR]; Nuzzolese, Andrea Giovanni [ISTC-CNR]; Carletti, Gianluca [ARIA SpA]; Giulianelli, Elio [ISPRA]; Picone, Marco [ISPRA]; Settanta, Giulio [ISPRA]
This deliverable introduces the data pre-processing that needs to be carried out on the data providers' side in order to prepare the data to be transformed into the WHOW knowledge graph.
2021, Rapporto tecnico, ENG
Anna Sofia Lippolis, Giorgia Lodi, Andrea Giovanni Nuzzolese
The main goal of the WHOW project is to build an open and distributed knowledge graph capable of integrating and standardising heterogeneous data of the environmental and health domains coming from several data sources and available in different formats and structures. In particular, through identified business use cases, the project aims at creating a large knowledge base capable of linking data about water consumption and pollution with health parameters (e.g., disease spreading). The ultimate goal is to foster the creation of innovative applications, services and studies on top of the WHOW knowledge graph. Besides the business use cases, which are going to be extensively described in deliverable D2.1 to be released at the end of December 2021, a key aspect of the WHOW project is the design of a fully distributed technical architecture for the effective creation and publication of the WHOW open and distributed knowledge graph. The technical architecture, which can be adopted by newcomers who want to contribute to the WHOW knowledge graph, consists of two main macro-elements: - a set of semantic resources, including ontologies and linked open data, that are designed and produced to provide a shared semantics and standard for representing heterogeneous data of different actors and domains (i.e., water, health); - a set of software components that, using the aforementioned semantic resources, are able to provide: (i) data consumers with tools for consuming data, and related data models, via human- and machine-based interaction services; (ii) data providers with a technical architecture that offers software services for a sustainable data management process. This deliverable, which represents an important milestone (#7) of the project, focuses on the design of the technical architecture and describes: - a set of technical use cases that define how different types of users (data consumers and data providers) can interact with, and leverage the functionalities offered by, the WHOW technical architecture; - the functional and non-functional requirements that are derived from each technical use case and used in synergy with the business use cases of the project; - the high-level view of the architecture, made up of software services and semantic resources; - a component-based design of the architecture that illustrates the interfaces used in the interactions among the architectural components; - the process that is enabled in the construction of the ontologies and controlled vocabularies of the WHOW knowledge graph. The technical formalism used to visually represent the design of the architecture is the Unified Modeling Language (UML), a well-known and widely used modelling language in software engineering projects. Therefore, UML use case, component and activity diagrams are introduced throughout the deliverable.
2021, Rapporto tecnico, ITA
A. Messina, U. Maniscalco, P. Storniolo
In this work, we compare TypeDB with the Semantic Web standards, focusing in particular on RDF, XML, RDFS, OWL, SPARQL and SHACL. There are some key similarities between these two sets of technologies, mainly because both are rooted in the field of symbolic AI, knowledge representation and automated reasoning. These similarities include: 1. Both allow developers to represent and query complex, heterogeneous datasets. 2. Both provide the ability to add semantics to complex datasets. 3. Both allow the user to perform automated deductive reasoning over large amounts of data. However, there are fundamental differences between these technologies, since they were designed for different kinds of applications. Specifically, the Semantic Web was conceived for the Web, with incomplete data coming from many sources, where anyone can contribute to the definition of, and mapping between, information sources. TypeDB, by contrast, was not created to share data on the Web, but to work as a transactional database for "closed" organisations. For this reason, comparing the two technologies can at times be misleading. These differences can be summarised as follows: 1. Compared to the Semantic Web, TypeDB reduces complexity while maintaining a high degree of expressiveness. With TypeDB, we avoid having to learn the various Semantic Web standards, each with a high level of complexity. This allows users to become productive more quickly. 2. TypeDB provides a higher-level abstraction for working with complex data than the Semantic Web standards. With RDF we model the world in triples, which is a lower-level data model than TypeDB's concept-level entity-relationship schema. Modelling and querying higher-order relationships and complex data are native in TypeDB. 3. The Semantic Web standards are built for the Web, whereas TypeDB works for closed systems with private data. The former were designed to work with linked data on an open Web with incomplete data, while TypeDB works as a traditional database management system in a closed environment. In this work we highlight that there are strong overlaps in how both technologies offer tools for knowledge representation and automated reasoning, and we illustrate the most important concepts at a high level without going too deep into the details. The goal is to help users with an RDF/OWL background become familiar with TypeDB.
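To make the second difference concrete, the hedged sketch below (plain rdflib with an illustrative namespace, not TypeQL or any TypeDB API) shows how a single higher-order relation must be decomposed into several triples around an intermediate node when modelled in RDF:

```python
from rdflib import Graph, Namespace, BNode, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")  # illustrative namespace
g = Graph()
g.bind("ex", EX)

# A ternary "employment" relation has no direct triple form in RDF:
# it is represented by an intermediate node carrying one triple per role.
employment = BNode()
g.add((employment, RDF.type, EX.Employment))
g.add((employment, EX.employee, EX.alice))
g.add((employment, EX.employer, EX.acme))
g.add((employment, EX.startDate, Literal("2021-01-01")))

print(g.serialize(format="turtle"))
```

In a concept-level entity-relationship schema such as TypeDB's, a relation of this kind is declared and queried as a single native construct rather than reconstructed from triples.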
2021, Poster, ENG
Perego R.; Pibiri G.E.; Venturini R.
The sheer increase in volume of RDF data demands efficient solutions for the triple indexing problem, that is, devising a compressed data structure to compactly represent RDF triples while guaranteeing, at the same time, fast pattern matching operations. This problem lies at the heart of delivering good practical performance for the resolution of complex SPARQL queries on large RDF datasets. We propose a trie-based index layout to solve the problem and introduce two novel techniques to reduce its space of representation for improved effectiveness. The extensive experimental analysis reveals that our best space/time trade-off configuration substantially outperforms existing state-of-the-art solutions, taking 30-60% less space and speeding up query execution by a factor of 2 to 81.
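The general shape of such an index can be conveyed with the hedged sketch below (an uncompressed, dictionary-based Python toy in subject-predicate-object order; the actual contribution uses compressed trie levels and is far more space-efficient):

```python
class TripleIndex:
    """Toy trie-shaped (subject -> predicate -> object) index for RDF triples."""

    def __init__(self):
        self.spo = {}  # subject -> predicate -> set of objects

    def add(self, s, p, o):
        self.spo.setdefault(s, {}).setdefault(p, set()).add(o)

    def match(self, s=None, p=None, o=None):
        """Yield all triples matching a pattern; None acts as a wildcard."""
        subjects = [s] if s is not None else list(self.spo)
        for subj in subjects:
            po = self.spo.get(subj, {})
            predicates = [p] if p is not None else list(po)
            for pred in predicates:
                for obj in po.get(pred, ()):
                    if o is None or o == obj:
                        yield (subj, pred, obj)

idx = TripleIndex()
idx.add("ex:alice", "foaf:knows", "ex:bob")
idx.add("ex:alice", "foaf:name", '"Alice"')
print(list(idx.match(s="ex:alice")))  # all triples about ex:alice
```

Production indexes typically also keep additional permutations (e.g. predicate- or object-first orderings) so that every triple pattern can be resolved by a single trie traversal.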
2020, Contributo in atti di convegno, ENG
M. Fiorelli, A. Stellato, T. Lorenzetti, A. Turbati, P. Schmitz, E. Francesconi, N. Hajlaoui, B. Batouche
OntoLex-Lemon is a collection of RDF vocabularies for specifying the verbalization of ontologies in natural language. Beyond its original scope, OntoLex-Lemon, as well as its predecessor Monnet lemon, found application in the Linguistic Linked Open Data cloud to represent and interlink language resources on the Semantic Web. Unfortunately, generic ontology and RDF editors were considered inconvenient to use with OntoLex-Lemon because of its complex design patterns and other peculiarities, including indirection, reification and subtle integrity constraints. This perception led to the development of dedicated editors, trading the flexibility of RDF in combining different models (and the features already available in existing RDF editors) for a more direct and streamlined editing of OntoLex-Lemon patterns. In this paper, we investigate the benefits gained by extending an already existing RDF editor, VocBench 3, with capabilities closely tailored to OntoLex-Lemon, and the challenges that such an extension implies. The outcome of this investigation is twofold: a vertical assessment of a new editor for OntoLex-Lemon and, in the broader scope of RDF editor design, a new perspective on the flexibility and extensibility characteristics an editor should meet in order to cover new core modeling vocabularies, for which OntoLex-Lemon represents a use case.
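The indirection mentioned above can be seen in the core OntoLex-Lemon pattern, in which a lexical entry points to an ontology concept only through a canonical form and a lexical sense. The hedged rdflib sketch below builds one such entry (the lexicon IRIs and the DBpedia reference are illustrative):

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
EX = Namespace("http://example.org/lexicon/")  # illustrative lexicon namespace

g = Graph()
g.bind("ontolex", ONTOLEX)

entry, form, sense = EX.cat_entry, EX.cat_form, EX.cat_sense
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, form))   # indirection: entry -> form
g.add((form, RDF.type, ONTOLEX.Form))
g.add((form, ONTOLEX.writtenRep, Literal("cat", lang="en")))
g.add((entry, ONTOLEX.sense, sense))          # indirection: entry -> sense
g.add((sense, RDF.type, ONTOLEX.LexicalSense))
g.add((sense, ONTOLEX.reference, URIRef("http://dbpedia.org/resource/Cat")))

print(g.serialize(format="turtle"))
```

Even this minimal pattern involves several interdependent resources, which is precisely the kind of editing burden that dedicated, OntoLex-aware support in an RDF editor is meant to relieve.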
2020, Contributo in atti di convegno, ENG
M. Fiorelli, A. Stellato, T. Lorenzetti, A. Turbati, P. Schmitz, E. Francesconi, N. Hajlaoui, B. Batouche
OntoLex-Lemon is a collection of RDF vocabularies for specifying the verbalization of ontologies in natural language. Beyond its original scope, OntoLex-Lemon, as well as its predecessor Monnet lemon, found application in the Linguistic Linked Open Data cloud to represent and interlink language resources on the Semantic Web. Unfortunately, generic ontology and RDF editors were considered inconvenient to use with OntoLex-Lemon because of its complex design patterns and other peculiarities, including indirection, reification and subtle integrity constraints. This perception led to the development of dedicated editors, trading the flexibility of RDF in combining different models (and the features already available in existing RDF editors) for a more direct and streamlined editing of OntoLex-Lemon patterns. In this paper, we investigate the benefits gained by extending an already existing RDF editor, VocBench 3, with capabilities closely tailored to OntoLex-Lemon, and the challenges that such an extension implies. The outcome of this investigation is twofold: a vertical assessment of a new editor for OntoLex-Lemon and, in the broader scope of RDF editor design, a new perspective on the flexibility and extensibility characteristics an editor should meet in order to cover new core modeling vocabularies, for which OntoLex-Lemon represents a use case.
2020, Articolo in rivista, ENG
Pibiri G.E.; Perego R.; Venturini R.
The sheer increase in volume of RDF data demands efficient solutions for the triple indexing problem, that is, devising a compressed data structure to compactly represent RDF triples while guaranteeing, at the same time, fast pattern matching operations. This problem lies at the heart of delivering good practical performance for the resolution of complex SPARQL queries on large RDF datasets. In this work, we propose a trie-based index layout to solve the problem and introduce two novel techniques to reduce its space of representation for improved effectiveness. The extensive experimental analysis, conducted over a wide range of publicly available real-world datasets, reveals that our best space/time trade-off configuration substantially outperforms existing state-of-the-art solutions, taking 30-60% less space and speeding up query execution by a factor of 2-81×.
2020, Articolo in rivista, ENG
Peroni, Silvio; Ciancarini, Paolo; Gangemi, Aldo; Nuzzolese, Andrea Giovanni; Poggi, Francesco; Presutti, Valentina
In this article, we discuss the outcomes of an experiment in which we analysed whether, and to what extent, the introduction in 2012 of the new research assessment exercise in Italy (a.k.a. Italian Scientific Habilitation) affected self-citation behaviours in the Italian research community. The Italian Scientific Habilitation attests to the scientific maturity of researchers and, in Italy as in many other countries, is a requirement for access to a professorship. To this end, we obtained from ScienceDirect 35,673 articles published from 1957 to 2016 by the participants in the 2012 Italian Scientific Habilitation, which resulted in the extraction of 1,379,050 citations retrieved through Semantic Publishing technologies. Our analysis showed an overall increment in author self-citations (i.e. where the citing article and the cited article share at least one author) in several of the 24 academic disciplines considered. However, we found a stronger causal relation between such an increment and the rules introduced by the 2012 Italian Scientific Habilitation in 10 out of the 24 disciplines analysed.
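For clarity, the self-citation criterion used above (the citing and the cited article share at least one author) can be expressed as the hedged sketch below; the data layout is hypothetical, not the one used in the study:

```python
def is_self_citation(citing_authors, cited_authors):
    """Author self-citation: the two articles share at least one author."""
    return bool(set(citing_authors) & set(cited_authors))

def self_citation_rate(citations, authors_of):
    """citations: iterable of (citing_id, cited_id) pairs;
    authors_of: dict mapping article id -> collection of author ids."""
    total = self_cites = 0
    for citing, cited in citations:
        total += 1
        if is_self_citation(authors_of.get(citing, ()), authors_of.get(cited, ())):
            self_cites += 1
    return self_cites / total if total else 0.0

authors_of = {"a1": {"smith", "jones"}, "a2": {"jones"}, "a3": {"lee"}}
print(self_citation_rate([("a1", "a2"), ("a1", "a3")], authors_of))  # 0.5
```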
2019, Software, ENG
Fugazza Cristiano
Re-engineering in the Python language of the Liftboy software (https://intranet.cnr.it/servizi/people/prodotto/scheda/i/463275), originally implemented in Java.
2019, Software, ENG
Alessandro Oggioni
RDF FOAF Manufacturer list
2019, Contributo in atti di convegno, ENG
John P. McCrae, Fahad Khan, Ilan Kernerman, Thierry Declerck, Carole Tiberius, Monica Monachini, Sina Ahmadi
ELEXIS is a project that aims to create a European network of lexical resources, and one of the key challenges for this is the development of an interoperable interface for different lexical resources so that further tools may improve the data. This paper describes this interface and in particular describes the five methods of entrance into the infrastructure, through retrodigitization, by conversion to TEI-Lex0, by the TEILex0 format, by the OntoLex format or through the REST interface described in this paper. The interface has the role of allowing dictionaries to be ingested into the ELEXIS system, so that they can be linked to each other, used by NLP tools and made available through tools to Sketch Engine and Lexonomy. Most importantly, these dictionaries will all be linked to each other through the Dictionary Matrix, a collection of linked dictionaries that will be created by the project. There are five principal ways that a dictionary maybe entered into the Matrix Dictionary: either through retrodigitization; by conversion to TEI Lex-0 by means of the forthcoming ELEXIS conversion tool; by directly providing TEI Lex-0 data; by providing data in a compatible format (including OntoLex); or by implementing the REST interface described in this paper.
2019, Software, ENG
Alessandro Oggioni
Manufacturers list in RDF FOAF
2019, Contributo in volume, ENG
Asprino, Luigi; Gangemi, Aldo; Nuzzolese, Andrea Giovanni; Presutti, Valentina; Reforgiato Recupero, Diego; Russo, Alessandro
MARIO is an assistive robot designed to support a set of knowledge-intensive tasks aimed at increasing autonomy and reducing loneliness in people with dementia, and at supporting caregivers in their activity of assessing patients' cognitive status. Examples of knowledge-intensive tasks are the comprehensive geriatric assessment (CGA) and the delivery of reminiscence therapy. In order to enable these tasks, MARIO features a set of abilities implemented by pluggable software applications. MARIO's abilities contribute to and benefit from a common knowledge management framework. For example, the ability associated with the CGA retrieves from the framework the questions to be posed to the patient and stores the obtained answers and the associated relevant metadata. In this work we present the MARIO knowledge management software framework, which combines robotics with ontology-based approaches and Semantic Web technologies. It consists of (1) a set of interconnected and modularized ontologies, meant to model all knowledge areas that are relevant for MARIO abilities, and (2) a set of software interfaces that provide high-level access to the ontology network and its associated knowledge base. Finally, we demonstrate how the knowledge management framework supports the applications for CGA and reminiscence therapy, implemented on top of the knowledge base.
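As a hedged illustration of what such a high-level software interface might look like (names, namespace and query are hypothetical, not MARIO's actual API), the sketch below wraps an RDF knowledge base so that an ability can fetch CGA questions and store answers without writing SPARQL itself:

```python
from rdflib import Graph, URIRef, Literal

CGA = "http://example.org/mario/cga#"  # hypothetical ontology namespace

class KnowledgeBaseFacade:
    """Hypothetical high-level accessor hiding SPARQL from ability developers."""

    def __init__(self, kb_path):
        self.graph = Graph()
        self.graph.parse(kb_path)  # e.g. a Turtle file backing the knowledge base

    def cga_questions(self):
        """Return the text of all CGA questions stored in the knowledge base."""
        query = """
        PREFIX cga: <http://example.org/mario/cga#>
        SELECT ?text WHERE { ?q a cga:Question ; cga:text ?text }
        """
        return [str(row.text) for row in self.graph.query(query)]

    def store_answer(self, question_iri, answer_text):
        """Record a patient's answer as a new triple (illustrative only)."""
        self.graph.add((URIRef(question_iri), URIRef(CGA + "answer"), Literal(answer_text)))
```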
2018, Contributo in atti di convegno, ENG
Graziosi, Alice; Di Iorio, Angelo; Poggi, Francesco; Peroni, Silvio; Bonini, Luca
This paper is about web applications to browse and efficiently visualise large Linked Open Datasets (LOD). The focus is on the customisation of LOD views over semantic datasets, also for non-expert users. The paper presents the motivation for and the details of a visual data format and a chain of tools to easily produce and customise such visualisations. Two proofs of concept are also presented in order to demonstrate the feasibility and flexibility of our approach.
2018, Rapporto di progetto (Project report), ITA
Michela Costa, Daniele Piazzullo
The objective of Activity 2.2 was the development of models of drying systems for the residual biomass available on board ship, to be integrated into system models, i.e. simulation tools for plant layouts aimed at energy recovery and characterised by high efficiency. The indirect drying process was modelled with a sub-model developed in ThermoflexTM. Direct drying was instead modelled using both an in-house model and a ThermoflexTM sub-model for direct drying. In this case the two computational tools provided results in exceptionally good agreement, both for the drying temperature and for the drying-agent flow rate that achieves complete evaporation of the moisture content of the biomass. RDF, FORS and wastewater treatment sludge were considered, and drying was hypothesised with air, with steam (indirect drying only) and with the exhaust gases of an internal combustion engine, suitably diluted with fresh air to reach temperatures compatible with safe operating conditions. The study highlighted two fundamental aspects: in direct drying, the exhaust gases behave less effectively than air as long as the negative effect due to the water vapour in their composition, which limits the evaporation of the biomass moisture, outweighs the positive effect deriving from their different thermodynamic properties, which raises the drying temperature. The balance between the two effects depends on the initial moisture content of the biomass. For RDF, the comparison with air showed that the exhaust gases are less effective over the whole range of drying-agent temperatures investigated. For FORS and sludge, there is a temperature value above which the exhaust gases become preferable to air. The comparison between direct and indirect drying showed that, depending on the drying-agent temperature, one type is preferable to the other; that is, there is a value of this parameter at which the behaviour of the two systems (direct and indirect) is reversed. In any case, the choice between the two types must be made not only on the basis of calculations such as those carried out here, but also by verifying the safety constraints against possible self-combustion, which could arise from the release of volatile substances from the biomass and their contact with oxygen at temperatures high enough to cause auto-ignition. The work concluded with a preliminary sizing of an indirect dryer for FORS or sewage sludge, for a flow rate corresponding to a hypothetical 1000-passenger module of a modern cruise ship. Drying was assumed to be incomplete, but sufficient to reduce the moisture content of the biomass to be sent to gasification to 20%, in line with the indications in the literature.
2018, Articolo in rivista, ENG
Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola
Geospatial metadata are often encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Semantic Web has not so far induced a consistent, interlinked baseline in the geospatial domain. Datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans and duplicated across different systems, seldom consistently. We address these issues by relating metadata items to resources that represent keywords, institutes, researchers, toponyms, and virtually any RDF data structure made available over the Web via SPARQL endpoints. Essentially, our methodology fosters delegated metadata management, as the entities referred to in metadata are independent, decentralized data structures with their own life cycle. Our example implementation of delegated metadata envisages: (i) editing via customizable web-based forms (including injection of semantic information); (ii) encoding of records in any XML metadata schema; and (iii) translation into RDF. Among the semantics-aware features that this practice enables, we present a worked-out example focusing on automatic update of metadata descriptions. Our approach, demonstrated in the context of INSPIRE metadata (the ISO 19115/19119 profile eliciting integration of European geospatial resources), is also applicable to a broad range of metadata standards, including non-geospatial ones.
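The delegated lookup at the heart of this methodology can be sketched as follows (a hedged example using SPARQLWrapper; the endpoint URL and the label-based matching are placeholders, not the actual setup described in the article): a metadata editor resolves a free-text toponym or keyword to the URI of an independently maintained RDF resource, and only the URI is stored in the record.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

def resolve_label(endpoint_url, label):
    """Resolve a label to candidate RDF resources at a remote SPARQL endpoint."""
    endpoint = SPARQLWrapper(endpoint_url)
    endpoint.setReturnFormat(JSON)
    endpoint.setQuery("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?resource ?label WHERE {
          ?resource rdfs:label ?label .
          FILTER(LCASE(STR(?label)) = "%s")
        } LIMIT 5
    """ % label.lower())
    results = endpoint.query().convert()
    return [(b["resource"]["value"], b["label"]["value"])
            for b in results["results"]["bindings"]]

# Placeholder endpoint: any SPARQL endpoint exposing labelled resources would do.
print(resolve_label("https://example.org/sparql", "Lake Maggiore"))
```

Because the referenced entities live in their own decentralized data structures, metadata descriptions can later be refreshed automatically when those entities change, which is the worked-out example discussed in the article.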