Figures:
(*) 01-a-overview-modules.png
(*) 01-core-interaction-model(subject-context).png
(*) 01-core-structure.png
(*) 02-complete-UML-diagram.png
(*) 03-webvowl-preview.png
(*) 04-OWLGrEd-preview.png
The BCI ontology specifies a foundational metadata model for real-world multimodal Brain-Computer Interaction (BCI) data capture activities. Its structure depicts a conceptual framework that BCI applications can extend and use in their implementations to define core concepts that capture a relevant and interoperable metadata vocabulary. The ontology is aligned to the Semantic Sensor Network Ontology (SSN), a domain-independent, end-to-end model for sensor/actuator applications. Its structure has therefore been normalized so that it can be used in conjunction with other ontologies or linked-data resources to specify any particular definitions (such as units of measurement, time and time series, and location and mobility) that specialized applications in the BCI domain might need. This spec also provides general alignment and data modeling guidelines for core concepts, to help BCI applications in their design.
Dave Beckett
Nikki Rogers
Participants in W3C's Semantic Web Deployment Working Group.
2014-03-27
2017-04-17
Alistair Miles
Sean Bechhofer
Sergio José Rodríguez Méndez. Pervasive Embedded Technologies Laboratory (PET Lab), Computer Science Department, NCTU, Taiwan. John K. Zao. Pervasive Embedded Technologies Laboratory (PET Lab), Computer Science Department, NCTU, Taiwan and CerebraTek, Taiwan.
An RDF vocabulary for describing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, 'folksonomies', other types of controlled vocabulary, and also concept schemes embedded in glossaries and terminologies.
The BCI ontology (BCI-O) provides a high-level semantic structure and specialized metadata vocabulary for real-world multimodal BCI data capture activities. It defines a minimalist and simple abstract foundational metadata model for real-world BCI applications that monitor human activity in any scenario. BCI multimodal domain applications are encouraged to extend and use this ontology in their implementations. BCI-O was developed following W3C Semantic Web ontology standards and guidelines, so that BCI applications can express reusable, interoperable, and extensible machine-readable BCI metadata models, especially in pervasive M2M environments. For this purpose, its design was aligned to the Semantic Sensor Network Ontology (SSN), closely following its Stimulus-Sensor-Observation Ontology Design Pattern. The core set of relevant metadata definitions for real-world BCI activities was taken from different vocabularies and formats proposed in the BCI domain, such as XDF, ESS, and HED. BCI-O concepts are logically grouped into 11 modules. Each module represents a central topic of the ontology structure, where the related concepts give a consistent explanation of its functional data model. The modules are:
(*) Subject: concepts related to the depiction of a human being (or human subject) engaging in an activity, and its associated state.
(*) Context: captures the architectural description of an environment (or context). A human being interacts with a context.
(*) Session: represents the interaction between a subject and a context while performing a single activity, under specific settings and conditions.
(*) Observations (was SSN-Skeleton): specific concepts for BCI activities aligned to the SOSA/SSN axioms for modeling Observations (the initial alignment was to the Skeleton of [oldSSN]). Metadata related to records, modality types (such as EEG), channeling information, output streams (file formats and access), and stimulus events are found in this module.
(*) Sensors (was SSN-Device): specific concepts for BCI activities aligned to the SOSA/SSN axioms for modeling Sensors (under Observations) (the initial alignment was to the Device module of [oldSSN]). Metadata related to devices and their channeling specification are found in this module.
(*) System Capabilities (was SSN-MeasurementCapability): specific concepts for BCI activities aligned to the SSN horizontal segmentation module for System Capabilities (the initial alignment was to the Measurement Capability module of [oldSSN]). Metadata related to channels and other measurement properties are found in this module.
(*) Results (was SSN-Data): specific concepts for BCI activities aligned to the SOSA axioms for modeling Results (the initial alignment was to the Data module of [oldSSN]). Metadata related to data blocks, recorded data, and actuation results are defined in this module.
(*) data tagging).
(*) Actuation: specific concepts for BCI activities aligned to the SOSA axioms and SAN axioms for modeling Actuations. Similarly to the approach described in [Seydoux2016], this module depicts how a subject can interact with the physical/virtual world (context) in BCI activities. Its main classes, Actuator and Actuation, are modeled following the Actuation-Actuator-Effect (AAE) design pattern: a core model for the IoT Ontology (IoT-O).
(*) Descriptor: a descriptor represents an external resource set that extends the description of entities in the ontology. A descriptor complements the information associated with the relevant metadata set defined in this ontology.
(*) EEG: specific concepts for EEG (Electroencephalography) applications.
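The interplay of the Subject, Context, and Session modules can be sketched as instance data in Turtle. This is an illustrative sketch only: the instance IRIs and the two linking properties are placeholders (not terms defined by BCI-O); only the class names follow the module names above.

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .
@prefix ex:  <http://example.org/data/> .

# A subject interacting with a context during a single session.
ex:subject-01 a bci:Subject .
ex:home       a bci:Context .
ex:session-01 a bci:Session ;
    ex:involvesSubject  ex:subject-01 ;   # placeholder property, not a BCI-O term
    ex:happensInContext ex:home .         # placeholder property, not a BCI-O term
```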
This spec has been registered in the following open repositories:

Open Repositories where BCI-O can be accessed:
(*) w3id.org (GitHub) | https://github.com/perma-id/w3id.org/tree/master/BCI-ontology | Permanent URI for the WWW
(*) Linked Open Vocabularies | http://lov.okfn.org/dataset/lov/vocabs/bci | LOD community
(*) BioPortal | http://bioportal.bioontology.org/ontologies/BCI-O | BioMedical community

Some early BCI-O applications are presented at the end of the spec.
This ontology is based on the SSN Ontology by the W3C Semantic Sensor Networks Incubator Group (SSN-XG), together with considerations from the W3C/OGC Spatial Data on the Web Working Group.
https://w3id.org/BCI-ontology#
2014-03-27
2018-06-07T23:54:36
(*) [Compton2009] Compton, M.; Neuhaus, H.; Taylor, K.; Tran, K. "Reasoning about Sensors and Compositions". In Proceedings of the 2nd International Workshop on Semantic Sensor Networks (SSN09) at ISWC 2009, pp. 33-48, 2009. URL=http://ceur-ws.org/Vol-522/p7.pdf.
(*) [ESS] SCCN, "EEG Study Schema (ESS)". Resources: [ESS@SCCN], [ESS v2.0].
(*) [HED] N. Bigdely-Shamlo, K. Kreutz-Delgado, M. Miyakoshi, M. Westerfield, T. Bel-Bahar, C. Kothe and K. Robbins, "Hierarchical event descriptor (HED) tags for analysis of event-related EEG studies". Resources: [HED@SCCN], [HED v2.0].
(*) [OWL-Time] OGC & W3C, "Time Ontology in OWL (W3C Recommendation 19 October 2017)", https://www.w3.org/TR/owl-time/.
(*) [Seydoux2016] Seydoux, N.; Drira, K.; Hernandez, N.; Monteil, T. "IoT-O, a Core-Domain IoT Ontology to Represent Connected Devices Networks". 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2016), Bologna, Italy, 2016, pp. 561-576. DOI=http://dx.doi.org/10.1007/978-3-319-49004-5_36. URL=https://dl.acm.org/citation.cfm?id=3092997.
(*) [SAN] The "Semantic Actuator Network (SAN)" ontology. http://lov.okfn.org/dataset/lov/vocabs/SAN.
(*) [AAE] The "Actuation-Actuator-Effect (AAE)" design pattern. http://ontologydesignpatterns.org/wiki/Submissions:Actuation-Actuator-Effect.
(*) [Shafer2001] Shafer, S. A. N.; Brumitt, B.; Cadiz, J. J. "Interaction Issues in Context-aware Intelligent Environments". Human-Computer Interaction, Volume 16, Issue 2 (December 2001), pp. 363-378. DOI=http://dx.doi.org/10.1207/S15327051HCI16234_16. URL=http://dl.acm.org/citation.cfm?id=1463124.
(*) [SSN] OGC & W3C, "Semantic Sensor Network Ontology (W3C Recommendation 19 October 2017)", https://www.w3.org/TR/vocab-ssn/. See also: W3C Spatial Data on the Web Working Group, https://www.w3.org/2015/spatial/wiki/Main_Page.
(*) [oldSSN] W3C, "Semantic Sensor Network (SSN) Ontology", http://www.w3.org/2005/Incubator/ssn/ssnx/ssn.html. See also: W3C Semantic Sensor Network Incubator Group, "Semantic Sensor Network XG Final Report", http://www.w3.org/2005/Incubator/ssn/XGR-ssn-20110628/.
(*) [Unity] Unity Gaming Platform (http://unity3d.com/); Unity's Gaming Modeling Architecture Manual, http://docs.unity3d.com/Manual/index.html.
(*) [XDF] C. Kothe and C. Brunner, "XDF (Extensible Data Format)", https://code.google.com/p/xdf/.
Copyright 2014 - 2018 PET Lab, Computer Science Department, NCTU, Taiwan.
Copyright 2017 W3C/OGC.
http://www.essepuntato.it/lode/owlapi/https://w3id.org/BCI-ontology# (visualise the BCI-ontology with LODE)
Brain-Computer Interaction (BCI) Ontology
SKOS Vocabulary
Sensor, Observation, Sample, and Actuator (SOSA) Ontology
bci
sosa
http://www.w3.org/ns/sosa/
https://w3id.org/BCI-ontology#
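The declared namespaces above correspond to the following Turtle prefix bindings:

```turtle
@prefix bci:  <https://w3id.org/BCI-ontology#> .
@prefix sosa: <http://www.w3.org/ns/sosa/> .
```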
The BCI ontology describes a framework of core concepts of the specialized metadata set for multimodal "Brain-Computer Interaction" (BCI) data capture activities. It is being developed by the "Pervasive Embedded Technologies" Laboratory (PET Lab) at the Computer Science Department of the National Chiao Tung University (NCTU), Taiwan (Republic of China, R.O.C.). Its concepts and structure depict a foundational metadata model for BCI data capture activities that BCI applications can extend and use in their implementations. Any feedback is welcome; please mail it to srodriguez@pet.cs.nctu.edu.tw.
The DOLCE+DnS Ultralite ontology.
It is a simplification of some parts of the DOLCE Lite-Plus library (cf. http://www.ontologydesignpatterns.org/ont/dul/DLP397.owl).
Main aspects in which DOLCE+DnS Ultralite departs from DOLCE Lite-Plus are the following:
- The names of classes and relations have been made more intuitive
- The DnS-related part is closer to the newer 'constructive DnS' ontology (http://www.ontologydesignpatterns.org/ont/dul/cDnS.owl).
- Temporal and spatial relations are simplified
- Qualities and regions are more relaxed than in DOLCE-Full: they can be used as attributes of any entity; an axiom states that each quality has a region
- Axiomatization makes use of simpler constructs than DOLCE Lite-Plus
- The architecture of the ontology is pattern-based, which means that DOLCE+DnS Ultralite is also available in modules, called 'content ontology design patterns', which can be applied independently in the design of domain ontologies (cf. http://www.ontologydesignpatterns.org). If many modules are needed in the same ontology project, it is nevertheless useful to use this integrated version.
The final result is a lightweight, easy-to-apply foundational ontology for modeling either physical or social contexts.
Several extensions of DOLCE+DnS Ultralite have been designed:
- Information objects: http://www.ontologydesignpatterns.org/ont/dul/IOLite.owl
- Systems: http://www.ontologydesignpatterns.org/ont/dul/SystemsLite.owl
- Plans: http://www.ontologydesignpatterns.org/ont/dul/PlansLite.owl
- Legal domain: http://www.ontologydesignpatterns.org/ont/dul/CLO/CoreLegal.owl
- Lexical and semiotic domains: http://www.ontologydesignpatterns.org/ont/lmm/LMM_L2.owl
- DOLCE-Zero: http://www.ontologydesignpatterns.org/ont/d0.owl is a commonsense-oriented generalisation of some top-level classes, which allows DOLCE to be used with tolerance for ambiguities such as abstract vs. concrete information, locations vs. physical artifacts, event occurrences vs. event types, events vs. situations, qualities vs. regions, etc.
DOLCE+DnS Ultralite
0.9.5,0.9.4,0.9.3,0.9.2,0.9.1,0.8.9,0.7.5,0.6.1
0.9.6
4.1
Created by Aldo Gangemi as both a simplification and extension of DOLCE and Descriptions and Situations ontologies.
In 3.2, the links between instances of Region or Parameter, and datatypes have been revised and made more powerful, in order to support efficient design patterns for data value modelling in OWL1.0.
Also, the names of the related properties have been changed in order to make them more intuitive.
Furthermore, a large comment field has been added to the 'expresses' object property, in order to clarify some issues about the many interpretations.
In 3.3, the relation between regions, parameters, and datatypes has been still improved.
In 3.5, the person-related classes have been refactored: Person in 3.4 is now SocialPerson, to avoid confusion with commonsense intuition; Person is now the union of social persons and humans, therefore being a subclass of Agent.
In 3.6, other fixes on universal restriction involving expresses. Also added the property 'isConstraintFor' between parameters and entities. Moved the properties: 'assumes' and 'adopts' to the new module: http://www.ontologydesignpatterns.org/ont/dul/Conceptualization.owl.
In 3.7, some fixes on the names of classes and properties related to FormalEntity; created a new separate module for general universal restrictions (DULGCI.owl).
In 3.8, more fixes on the interface to formal entities and links to IOLite.owl.
In 3.9, some naming and comment fixes.
In 3.10, removed cardinality restriction from hasPart and isPartOf restrictions (changed to hasComponent and isComponentOf), for OWL(DL) compatibility. Also enlarged the range of includesAgent to contain both social and physical agents, and of conceptualizes universal restriction on agents, to include all social objects.
In 3.11, some more subproperty axioms have been introduced, and all elements have got English labels.
In 3.12, added some classes to map some old DolceLitePlus classes that were used to align OntoWordNet.
In 3.13, added the LocalConcept class to express a Concept that cannot be used in a Description different from the one that defines it. Also updated some comments.
In 3.14, added some comments.
In 3.15, removed some owl:disjointWith axioms relating Collection to InformationObject, Description, Situation, and SocialAgent. The rationale for doing that is to allow less strict constraints on domain relations involving collections that can be also conceived as descriptions, situations, social agents, or information objects; for example: a collection of sentences from a text (an information object) that are ranked with a relevance criterion can be still considered a text.
In 3.16, name of isActedBy changed to actsThrough, which is clearer. Also added SpatioTemporalRegion as constituted by a SpaceRegion and a TimeInterval.
In 3.17, removed redundant universal axioms from Entity and other top classes. Fixed restrictions on FunctionalSubstance class, and comments in Design and Substance classes.
In 3.18, removed subClassOf axiom from FunctionalSubstance to DesignedArtifact, created a new subclass of FunctionalSubstance, called DesignedSubstance, and created a subClassOf axiom from DesignedSubstance to DesignedArtifact.
In 3.19, removed disjointness axiom between Concept and Collection (the same rationale applies as in version 3.15).
In 3.20, revised the comment for Quality, added InformationEntity as the superclass for InformationObject and InformationRealization (represented as the union of those classes). This is needed in many domain ontologies that do not need to distinguish between abstract and concrete aspects of information entities. One possible revision (not implemented here) would be to introduce the relations: expresses and isAbout with a broader domain:InformationEntity, and two more specific properties: abstractlyExpresses and isAbstractlyAbout. This last revision has not been implemented yet, since a large revision procedure should be carried out in order to check the impact of the revision on the existing DOLCE-DnS-Ultralite plugins.
In 3.21, added comment to InformationEntity, and optimized representation of equivalence for InformationRealization.
In 3.22, added comment to Personification.
In 3.23, added associatedWith object property, and put all object properties as subproperties of it.
In 3.24, removed hasProxy datatype property.
In 3.25, generalized domain and range of hasComponent and isComponentOf.
In 3.26, updated some comments in order to clarify or exemplify the concepts.
In 3.27, added rdfs:isDefinedBy annotations for Linked Data browsers.
In 3.28, broadened the universe of pre-/post-conditions to give room to events and states.
In 3.29, added some properties to support DBpedia alignment: sameSettingAs (situational analogous to coparticipation), including relations originating e.g. from sharing kinship, ownership, or roleplaying situations.
In 3.30, completed some domains and ranges (formerly owl:Thing, now dul:Entity), and added axiom: Organism subClassOf PhysicalAgent.
In 3.31, added a restriction to Quality and one to Region in order to ensure the original DOLCE constraint of qualities being always associated with a region, and vice versa. These axioms do not however exclude a direct applicability of qualities or regions to any other entity.
In 3.32, removed redundant union axioms and some restrictions, which spot a negative trade-off between expressivity and complexity.
In 3.33, added the ObjectAggregate class, added two property chains for coparticipation and same situation setting, updated some comments, added an axiom to Transition.
In 3.34, extended mereological support for parthood, introducing hasProperPart (transitive) as a middle property between hasPart (transitive and reflexive) and hasComponent (asymmetric). This solution then uses the "reflexive reduction" and "transitive reduction" design patterns (they allow property characteristics to be granted through the superproperties, but not in the subproperties). Technically, mereology axioms would require that hasProperPart also be asymmetric; however, a direct subproperty of an OWL non-simple property (hasPart) cannot also be asymmetric, hence the approximation.
Added an n-ary parthood class in order to suggest an alternative pattern for time- (and space-)indexed part relations. In order to ensure that property characteristics also hold with n-ary parthood, a property chain is introduced which infers a direct dul:partOf property for each parthood individual.
Added a dul:realizesSelfInformation property in order to enable local reflexivity ('Self') axioms for all information realizations.
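The parthood changes described for 3.34 can be sketched in Turtle. This is a paraphrase of the notes above, not the released DUL axiomatization, so details may differ:

```turtle
@prefix dul:  <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# "Reflexive reduction" / "transitive reduction": characteristics are granted
# through the superproperties, not in the subproperties.
dul:hasPart       a owl:TransitiveProperty , owl:ReflexiveProperty .
dul:hasProperPart a owl:TransitiveProperty ;
    rdfs:subPropertyOf dul:hasPart .
dul:hasComponent  a owl:AsymmetricProperty ;
    rdfs:subPropertyOf dul:hasProperPart .
```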
In 4.0, some foundational changes are introduced.
- Firstly, the temporally indexed versions of some properties are introduced as subclasses of Situation (following the n-ary relation pattern), thereby covering relations from DOLCE that were skipped because of their larger arity.
- Secondly, D&S's Situation class is extracted from DOLCE top-level distinctions (it used to be a subclass of SocialObject), put as a primitive class under Entity, and not disjoint from any other class. Since we are relaxing the semantics of Situation, this change is fully compatible with previous versions of DUL.
The reason for the change is that it may sound counterintuitive (as many have noticed) to assume a descriptive commitment for situations, but not for events or states.
In fact, D&S provides an epistemological commitment to an ontology, independently from its foundational distinctions. A situation operationalizes that epistemology, and it is better not to put it under any foundational distinction (event, object, fluent, etc.), leaving to the designer whether to use descriptions as epistemological lenses, and so generating a situation, or not.
A consequence is that any entity, when 'framed' by (satisfying) a description, becomes a situation. We can still model entities as being in a situation's setting, and classified by a concept defined in a description.
In 4.1, also the disjointness between Description and Concept has been dropped, in order to unify the projections of an intensional relation with its mereological dependencies. Until now, when a description d1 is part of another description d, we cannot model d1 as a concept defined or used by d, even though it is totally reasonable to consider it as such, i.e., playing a role in d. The impediment is due to the disjointness between being Description and Concept. By dropping that axiom, we can operate on descriptions and concepts more flexibly.
A compositional property, hasInScope, is introduced to link situations that are 'diagonally' modelled through a description, e.g., when a situation s1 involves a description d1 (diagonal meta-level), which is satisfied by a situation s2, s1 hasInScope s2.
Legal reasoning is a relevant example: a case in point (s2) may satisfy two legal norms (d1 and d2) that are conflicting according to a meta-norm d3; d3 can be satisfied by a situation s that 'interprets' (involves) both d1 and d2 against s2, hence s hasInScope s2. This may also happen when the two conflicting descriptions are satisfied by alternative situations sharing important elements, as in perspectival reasoning: the meta-description in this case has in scope both alternative situations.
Mappings to SAN: The Actuation Model of BCI-O was developed based on the following premises:
(*) It aims to integrate and reconcile the SOSA axioms [SSN] and the SAN axioms [SAN] for modeling actuations and actuators.
(*) It closely follows the proposed Actuation-Actuator-Effect (AAE) design pattern [AAE]: a core model for the IoT Ontology (IoT-O).
The following diagram depicts the BCI-O alignment to SAN.
[Figure: 06-Actuation-Model-Alignment-to-SAN.jpg]
As a broad application domain ontology for BCI activities, BCI-O integrates and refines some modeling considerations of the SOSA and SAN concepts regarding actuations and actuators. One example is the alignment of ActuationEvent to san:Effect (or san:ActuatorOutput):
(*) A san:Effect defines any kind of physical modification (an effect on the physical world -- the Context) induced by an actuator (a characteristic of its nature, as an agent that has an effect on the physical world).
(*) An ActuationEvent is a Context.Event, triggered by an Actuator, that changes the state of the ActuationTarget (which is a Context.Object).
Another modeling perspective inherited for BCI comes from the definition of the san:impacts object property:
[san:Effect] ---- (san:impacts) ---- [oldssn:Property]
The BCI-O alignment to SAN allows the following inferred relationship:
[ActuationEvent] ---- (san:impacts) ---- [ImpactedProperty]
The SOSA-SAN integrated Actuation Model of BCI-O represents a major contribution to the IoT and BCI communities.
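The inferred relationship above can be written out as instance data. The instance IRIs are hypothetical, and the san: prefix binding shown is illustrative (use the IRI published by the SAN ontology):

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .
@prefix san: <https://www.irit.fr/recherches/MELODI/ontologies/SAN#> .  # illustrative binding
@prefix ex:  <http://example.org/data/> .

# An actuation event that impacts a property of the context.
ex:event-01 a bci:ActuationEvent ;
    san:impacts ex:wheelSpeed .
ex:wheelSpeed a bci:ImpactedProperty .
```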
Mappings to SOSA/SSN: Initially, the BCI ontology was developed closely following its alignment to the [oldSSN] spec. By mid-July 2017, as the new version of SSN released by W3C ([SSN]) had reached Candidate Recommendation status (11 July 2017), the core concepts were "remapped" to the new SOSA/SSN definitions based on the SSNX Alignment Module ((6) Vertical Segmentation -- (6.2)) of the W3C Recommendation (19 October 2017):

SSNX Vertical Alignment:

bci | Initial mapping: oldssn | New mapping: SOSA/SSN | SSNX & BCI-O Remarks

Classes:
bci:Aspect | oldssn:FeatureOfInterest | sosa:FeatureOfInterest | (distinction between observation and actuation targets)
bci:Modality | oldssn:Property | sosa:ObservableProperty | (oldssn:Property ssn:Property); (sosa:ObservableProperty ssn:Property)
bci:StimulusEvent | oldssn:Stimulus | ssn:Stimulus |
bci:Device | oldssn:SensingDevice | sosa:Sensor | (sosa:Sensor oldssn:Sensor); (oldssn:SensingDevice oldssn:Sensor)
bci:Record | oldssn:Observation | sosa:Observation | oldssn:Observation: combination of oldssn axiomatic statements.
bci:RecordedData | oldssn:SensorOutput | sosa:Result | (oldssn:SensorOutput sosa:Result) and (combination of oldssn axiomatic statements); (alignment to sosa:Result)
bci:DataBlock | oldssn:ObservationValue | sosa:Result | (oldssn:ObservationValue sosa:Result) and (combination of oldssn axiomatic statements); (alignment removed)
bci:Channel | oldssn:MeasurementCapability | ssn-system:SystemCapability | (oldssn:MeasurementCapability ssn-system:SystemCapability) and (combination of oldssn axiomatic statements)
bci:NonChannel | oldssn:MeasurementCapability | ssn-system:SystemCapability | (oldssn:MeasurementCapability ssn-system:SystemCapability) and (combination of oldssn axiomatic statements)
bci:SamplingRate | oldssn:Frequency | ssn-system:Frequency | (oldssn:Frequency ssn-system:Frequency)
bci:DeviceSpec | oldssn:SensorDataSheet | (none) | unchanged
bci:Actuation | (none) | sosa:Actuation | new
bci:Actuator | (none) | sosa:Actuator | new
bci:ImpactedProperty | (none) | sosa:ActuatableProperty | new
bci:ActuationResult | (none) | sosa:Result | new
bci:ActuationTarget | (none) | sosa:FeatureOfInterest | new

Object Properties:
bci:hasModality | oldssn:hasProperty | ssn:hasProperty |
bci:isModalityOf | oldssn:isPropertyOf | ssn:isPropertyOf |
bci:aspectOfInterest | oldssn:featureOfInterest | sosa:hasFeatureOfInterest |
bci:madeRecord | oldssn:madeObservation | sosa:madeObservation |
bci:observedByDevice | oldssn:observedBy | sosa:madeBySensor |
bci:observedModality | oldssn:observedProperty | sosa:observedProperty |
bci:detects | oldssn:detects | ssn:detects |
bci:isProxyFor | oldssn:isProxyFor | ssn:isProxyFor |
bci:hasValue | oldssn:hasValue | sosa:hasResult | (deprecated to simplify the model)
bci:isProducedByDevice | oldssn:isProducedBy | (none) | not defined in SOSA/SSN; deprecated. Defined as the following role inclusion axiom: (bci:isObservationResultOf sosa:isResultOf) * (bci:observedByDevice sosa:madeBySensor) ⊆ (bci:isProducedByDevice)
bci:observationResult | oldssn:observationResult | sosa:hasResult |
bci:forModality | oldssn:forProperty | ssn:forProperty |
bci:hasNonChannelData | oldssn:hasMeasurementCapability | ssn-system:hasSystemCapability | (oldssn:hasMeasurementCapability ssn-system:hasSystemCapability)
bci:observes | oldssn:observes | sosa:observes | in combination with the property-chain axioms
bci:ofAspect | oldssn:ofFeature | (none) | not defined in SOSA/SSN; deprecated.
bci:includesEvent | DUL:includesEvent | (none) | not used in SOSA/SSN; deprecated. A new property has been defined: ssn:wasOriginatedBy

Symbolic notation:
(*) aligned to (subclass or sub-property of)
(*) equivalent concepts
(*) the new mapping implies a conceptual update

As of January 2018, since all alignments had been updated to the SOSA/SSN core classes, the new mappings are now based on the Dolce-Ultralite Alignment Module ((6) Vertical Segmentation -- (6.1)) of [SSN]. Thus, the BCI ontology imports the ssn-dul definitions. As one of the ontologies ("concept producers") that reuse SSN, BCI-O was selected as part of the W3C analysis on the usage of the SSN ontology ((3) Usage in ontologies (Producers)).
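The role inclusion axiom for the deprecated bci:isProducedByDevice, quoted in the table above, corresponds to an OWL 2 property chain; a sketch in Turtle:

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# bci:isObservationResultOf followed by bci:observedByDevice
# entails bci:isProducedByDevice.
bci:isProducedByDevice owl:propertyChainAxiom
    ( bci:isObservationResultOf bci:observedByDevice ) .
```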
Regarding Aspect and Modality: The importance of, and relationship between, the concepts sosa:FeatureOfInterest (superclass of Aspect) and sosa:ObservableProperty (superclass of Modality) are shown and explained in [oldSSN]: (*) (5.4.4 Agriculture Meteorology Sensor Network -- 5.4.4.1.1 Sensor selection), (*) (5.4.4 Agriculture Meteorology Sensor Network -- 5.4.4.2.2 CF (Climate and Forecast) ontologies), (*) (5.4.3 Wind sensor (WM30) -- 5.4.3.6 Wind Feature and properties), (*) (Wind Sensor example -- Wind Sensor example: Feature of interest). As an SSN domain application ontology, BCI-O defines and adjusts these concepts properly for describing the nature of BCI activity observations.
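In instance data, the Aspect/Modality pair plays the FeatureOfInterest/ObservableProperty roles; a minimal sketch (the instance IRIs are hypothetical):

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .
@prefix ex:  <http://example.org/data/> .

# A feature of interest (Aspect) carrying an observable property (Modality).
ex:brainActivity a bci:Aspect ;
    bci:hasModality ex:eegModality .
ex:eegModality a bci:Modality ;
    bci:isModalityOf ex:brainActivity .
```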
Regarding EEG concepts: This ontology defines the following EEG concepts: (*) EegModality, (*) EegNonChannel, (*) EegDevice, (*) EegRecord, (*) EegChannel. If necessary, BCI applications may define a set of restrictions and specialized connections (subproperties) for the relations among the EEG concepts.
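A minimal sketch of how the EEG concepts relate in instance data, using the mapped properties from the alignment table (the instance IRIs are hypothetical):

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .
@prefix ex:  <http://example.org/data/> .

# An EEG device, the modality it observes, and a record it produced.
ex:headset-01 a bci:EegDevice ;
    bci:observes ex:eegModality .
ex:eegModality a bci:EegModality .
ex:record-01 a bci:EegRecord ;
    bci:observedByDevice ex:headset-01 .
```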
Regarding the Procedure concept: [SSN] defines a general concept for Procedures, which encompasses any kind of Observations and Actuations; thus, it properly fits the domain of BCI data capture activities. Following ontology engineering good practices, and given that there is no specific description of Procedures for BCI data capture activities, this ontology does not define any new concept for them. Therefore, BCI-O applications that need to model "Procedure" in their metadata definitions should align directly to the concept sosa:Procedure. It is worth noting that this practice applies to any other high-level concepts that BCI-O applications might include in their vocabulary.
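Following that guideline, an application-level procedure aligns directly to sosa:Procedure; a hypothetical sketch (the myapp: terms are placeholders):

```turtle
@prefix sosa:  <http://www.w3.org/ns/sosa/> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix myapp: <http://example.org/myapp#> .

# The application's own procedure class, aligned to the SOSA concept.
myapp:EegCalibrationProcedure rdfs:subClassOf sosa:Procedure .
```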
Regarding the treatment of measurement units: This ontology leaves open to BCI applications how to handle the level of semantic expressiveness of measurement units. In general, there are two possible approaches (based on their data requirements):
(*) For data-centric applications: using data type properties (with previously-known units of measurement) implies that it is not necessary to incorporate into the ontology's definition a semantic structure to properly describe units of measurement.
(*) For semantic-centric applications: it is necessary to incorporate into the ontology's definition a structure to properly describe units of measurement, depending on the required level of semantic expressiveness.
The vast majority of BCI applications are heavily data-centric. Well-known measurement units for a wide range of metadata attributes are used in different specs (such as [XDF] and [ESS]), e.g., pixels, mm, degrees, and microvolts. For them, defining a relevant data type property set without specifying measurement units suffices. For BCI applications that require a proper level of semantic expressiveness for measurement units, this ontology provides the following guideline:
(*) The BCI concepts that are subject to extension are those related to Device and Record (including the channeling spec definitions).
(*) From the perspective of the BCI ontology's alignment to the Stimulus-Sensor-Observation Ontology Design Pattern, the core SSN concepts that "map" to quantities (and, therefore, to units of measurement) are ssn-system:SystemCapability and ssn-system:SystemProperty. Thus, BCI applications should pay special attention to extending the semantic structure of the concepts: (*) Channel, and (*) NonChannel.
(*) BCI applications should extend the BCI ontology based on the [oldSSN] guidelines, explained in:
(*) (5.3.10 Data -- 5.3.10.2 How to attach a data value to a property?)
(*) (5.3.13 Energy -- 5.3.13.3 How to represent a WSN node with information about its energy consumption)
(*) (5.4.2 Smart product example -- 5.4.2.2 Sensor)
(*) (5.4.2 Smart product example -- 5.4.2.3 Measurement capabilities)
(*) (5.4.2 Smart product example -- 5.4.2.4 Observation)
(*) (5.4.4 Agriculture Meteorology Sensor Network -- 5.4.4.1.3 Sensor view)
(*) (5.4.4 Agriculture Meteorology Sensor Network -- 5.4.4.2.3 Units of measurement and quantity ontologies)
(*) It is recommended to incorporate the semantic extensions through alignment to a proper ontology for units of measurement. Recommended ontologies for units of measurement are:
(*) QUDT -- Quantities, Units, Dimensions and Data Types Ontologies, developed by the NExIOM project (NASA, TopQuadrant).
(*) Ontology of units of Measure (OM): om-1.8.2, developed by a team of researchers at Wageningen UR (wurvoc.org).
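The two approaches can be contrasted in Turtle; every ex: term and the QUDT unit IRI below are hypothetical placeholders chosen for illustration:

```turtle
@prefix ex:   <http://example.org/myapp#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix unit: <http://qudt.org/1.1/vocab/unit#> .

# Data-centric: the unit (microvolts) is implied by the property itself.
ex:channel-01 ex:amplitudeMicrovolts "12.5"^^xsd:double .

# Semantic-centric: the unit is stated explicitly as part of the value.
ex:channel-01 ex:amplitude [
    ex:numericValue "12.5"^^xsd:double ;
    ex:unit unit:MicroV   # hypothetical unit IRI
] .
```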
Actuation: Automated Wheelchair: The following use case presents an example that depicts how to define the related BCI-O actuation concepts. Wheelchair driving scenario:
(*) Purpose: use an actuator capable of controlling a wheelchair based on the input from a BCI/EEG record (obtained directly from the subject's head).
(*) Description: Alice is driving a wheelchair through a human interface composed of three major components:
(*) An EEG sensor capable of reading brain signals.
(*) A computing system capable of processing and analyzing (classifying) the brain signals collected from the EEG sensor.
(*) An actuator capable of controlling the wheelchair's movement (such as direction and acceleration) based on the input from (2).
The actuator is a device that works in the following way:
(*) The processed brain signals issue specific movement commands to the actuator, such as:
(*) Command: "slow down" with:
(*) Direction: go forward (no change in direction).
(*) Acceleration: -10.5 cm/s2 (change in speed).
(*) The actuator mechanism:
(*) Implements the procedure (actuation) that controls the wheelchair.
(*) Triggers a series of steps aimed at changing the wheelchair's state: to decelerate its wheels.
The modeled BCI-O concepts involved in this scenario, excluding those from the observation component (except for EEG-Record and EEG-Device), are listed below:
(*) Subject (x1): "Alice"
(*) Session (x1): "a situation where the observation (EEG recording) and actuation happened"
(*) Activity (x1): "controlling the automated wheelchair"
(*) Context (x1): "at home"
(*) Context.Scene (x1): "specific indoors situation"
(*) EegRecord (x1): "observation of the EEG record that serves as the input of the actuations"
(*) EegDevice (x1): "EEG device that made the EEG recordings"
(*) Command (x1): "slow down : (EEG record) -- actuators"
(*) Actuator (x2): "the devices that perform the actuations"
(*) Actuation (x2): "the procedures that change the state of the wheels via actuators"
(*) ImpactedProperty (x2): "the speed of the wheels (their state)"
(*) ActuationEvent (x2): "reduce the speed of wheels"
(*) ActuationResult (x2): "slowing down" ("the effect of decelerating the wheels")
(*) ActuationTarget (x2): "rear wheels"
(*) Context.Object (composite) (x1): "wheelchair"
(*) Context.Method (x1): "deceleration of a wheel"
<DCMIType-StillImage>05-UserCase-Actuation.jpg</DCMIType-StillImage>
An RDF file containing a graph corresponding to this example is available.
<div class="example"><div class="example-title marker">Actuation: Automated Wheelchair</div>
<pre class="hljs xml" aria-busy="false" aria-live="polite">
@prefix :       <http://example.org/data/> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:    <http://www.w3.org/2002/07/owl#> .
@prefix schema: <http://schema.org/> .
@prefix time:   <http://www.w3.org/2006/time#> .
@prefix sosa:   <http://www.w3.org/ns/sosa/> .
@prefix ssn:    <http://www.w3.org/ns/ssn/> .
@prefix bci:    <https://w3id.org/BCI-ontology#> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .
@prefix qudt-1-1:      <http://qudt.org/1.1/schema/qudt#> .
@prefix qudt-unit-1-1: <http://qudt.org/1.1/vocab/unit#> .
@base <http://example.org/data/> .

# Alice performs a "controlling-wheelchair" activity "at home" (scene labeled as: "indoors-X3" #4).
# The context scene indoors-X3 #4 has the wheelchair as part of its structural composition.
<Alice> rdf:type bci:Subject .
<controlling-wheelchair> rdf:type bci:Activity .
<context/at-home/6> rdf:type bci:Context ;
    bci:hasScene <scene/indoors-X3/4> .
<scene/indoors-X3/4> rdf:type bci:Context.Scene ;
    bci:hasObject <wheelchair> .

# The wheelchair has a left-rear-wheel (#47) and a right-rear-wheel (#39), which are the actuation targets.
# All of them are context objects.
<wheelchair> rdf:type bci:Context.Object ;
    bci:hasObject <left-rear-wheel/47> ;  # bci:ActuationTarget defined below
    bci:hasObject <right-rear-wheel/39> . # bci:ActuationTarget defined below

# The session is titled "an actuation example".
# The session observes an EEG-Record and covers 2 actuations (one for each rear wheel).
<session/91> rdf:type bci:Session ;
    bci:hasTitle "an actuation example" ;
    bci:isSessionOf <Alice>, <context/at-home/6> ;
    bci:hasActivity <controlling-wheelchair> ;
    bci:hasRecord <EegRecord/46> ;
    bci:hasActuation <actuation/62>, <actuation/63> .

# Alice's EEG-Record is observed by EEG-device ctnp-A128 #5.
# This is the input for the command to "slow down" #11 that initiates the execution of the actuators.
# The details of the recording's data and results are not shown in this graph.
<EegRecord/46> rdf:type bci:EegRecord ;
    bci:observedByDevice <ctnp-A128/5> ;
    bci:isInputFor <cmd/slowDown/11> .

# EegDevice ctnp-A128 #5 observes the EEG recordings of Alice.
<ctnp-A128/5> rdf:type bci:EegDevice ;
    bci:madeRecord <EegRecord/46> .

# The SlowDown command defines two associated values:
# - direction: in this scenario, its value is "forward" (implies no change in this state's axis).
# - acceleration: in this scenario, its value is -10.5 cm/s2 (implies a change in this state's axis).
# The SlowDown command #11 gives the actuators servo4WC-ABC #1 and #2 their entry point for execution,
# based on the input received from the EEG recording #46.
:SlowDown rdfs:subClassOf bci:Command .
:direction a owl:ObjectProperty ;
    schema:domainIncludes :SlowDown ;
    schema:rangeIncludes rdfs:Literal .
:acceleration a owl:ObjectProperty ;
    schema:domainIncludes :SlowDown ;        # bci:Command
    schema:domainIncludes :slowingDown ;     # bci:ActuationResult
    schema:domainIncludes :accelerateWheel ; # bci:Context.Method
    schema:rangeIncludes qudt-1-1:QuantityValue .

_:acceleration-value rdf:type qudt-1-1:QuantityValue ;
    qudt-1-1:numericValue "-10.5"^^xsd:double ; # deceleration
    qudt-1-1:unit qudt-unit-1-1:CentimeterPerSecondSquared .

<cmd/slowDown/11> rdf:type :SlowDown ;
    :direction "forward" ;
    :acceleration _:acceleration-value ;
    bci:consumesInputFrom <EegRecord/46> ;
    bci:isExecutedBy <servo4WC-ABC/1>, <servo4WC-ABC/2> .

# servo4WC-ABC #1 made actuation #62, and servo4WC-ABC #2 made actuation #63:
# both execute the command to "slow down" #11.
# The model states that:
# - servo4WC-ABC/1 is designed to automatically change the speed of the right rear wheel.
# - servo4WC-ABC/2 is designed to automatically change the speed of the left rear wheel.
# Each actuator triggers an event to reduce the speed of the wheel that it is bound to.
<servo4WC-ABC/1> rdf:type bci:Actuator ;
    sosa:madeActuation <actuation/62> ;
    ssn:forProperty <right-rear-wheel/39#speed> ;
    bci:triggers <ae/reduceSpeed/81> .
<servo4WC-ABC/2> rdf:type bci:Actuator ;
    sosa:madeActuation <actuation/63> ;
    ssn:forProperty <left-rear-wheel/47#speed> ;
    bci:triggers <ae/reduceSpeed/82> .

# The rear wheels are the actuation targets (one per corresponding actuation procedure).
# Each wheel's speed state is an ImpactedProperty.
# The model allows us to explicitly state that:
# - left-rear-wheel/47#speed is a property of left-rear-wheel/47
# - right-rear-wheel/39#speed is a property of right-rear-wheel/39
<left-rear-wheel/47> rdf:type bci:ActuationTarget ;
    ssn:hasProperty <left-rear-wheel/47#speed> .
<right-rear-wheel/39> rdf:type bci:ActuationTarget ;
    ssn:hasProperty <right-rear-wheel/39#speed> .
<left-rear-wheel/47#speed> rdf:type bci:ImpactedProperty ;
    sosa:isActedOnBy <actuation/63> .
<right-rear-wheel/39#speed> rdf:type bci:ImpactedProperty ;
    sosa:isActedOnBy <actuation/62> .

# Actuation #62 acted on the state (speed) of right-rear-wheel #39,
# and returned "slowing down" #788 as its associated result.
# Actuation #63 acted on the state (speed) of left-rear-wheel #47,
# and returned "slowing down" #789 as its associated result.
# Each actuation has a timestamp for its associated result.
<actuation/62> rdf:type bci:Actuation ;
    sosa:actsOnProperty <right-rear-wheel/39#speed> ;
    sosa:actuationMadeBy <servo4WC-ABC/1> ;
    sosa:hasResult <ar/slowingDown/788> ;
    sosa:phenomenonTime _:actuation-time ;
    sosa:resultTime "2018-05-06T20:05:13+00:00"^^xsd:dateTimeStamp .
<actuation/63> rdf:type bci:Actuation ;
    sosa:actsOnProperty <left-rear-wheel/47#speed> ;
    sosa:actuationMadeBy <servo4WC-ABC/2> ;
    sosa:hasResult <ar/slowingDown/789> ;
    sosa:phenomenonTime _:actuation-time ;
    sosa:resultTime "2018-05-06T20:05:13+00:00"^^xsd:dateTimeStamp .

# The time interval of the actuations:
_:actuation-time rdf:type time:Interval ;
    time:hasBeginning [ rdf:type time:Instant ;
                        time:inXSDDateTimeStamp "2018-05-06T20:05:12+00:00"^^xsd:dateTimeStamp ] ;
    time:hasEnd [ rdf:type time:Instant ;
                  time:inXSDDateTimeStamp "2018-05-06T20:05:13+00:00"^^xsd:dateTimeStamp ] .

# The slowingDown actuation result defines a value for the acceleration (change of speed): see above...
# // :acceleration schema:domainIncludes :slowingDown ;
# // :slowingDown rdfs:subClassOf bci:ActuationResult .
# The actuation results "slowing down" #788 and #789 are defined in the following way:
# * Both "slowing down" #788 and #789 are the actuation results of actuations #62 and #63, respectively.
# * They have the associated values:
#   - change of speed: the deceleration of 10.5 cm/s2.
# * Both "slowing down" #788 and #789 involve the events "reduce speed" #81 and #82, respectively.
<ar/slowingDown/788> rdf:type :slowingDown ;
    :acceleration _:acceleration-value ;
    bci:involves <ae/reduceSpeed/81> .
<ar/slowingDown/789> rdf:type :slowingDown ;
    :acceleration _:acceleration-value ;
    bci:involves <ae/reduceSpeed/82> .

# The actuation events (#81 and #82) change the state of the actuation targets (the rear wheels)
# through the "accelerateWheel" (context method) deceleration effectuation, which handles the acceleration value.
# The "accelerateWheel" context method defines a "parameter" value: see above...
# // :acceleration schema:domainIncludes :accelerateWheel ;
# // :accelerateWheel rdfs:subClassOf bci:Context.Method .
<deacceleration> rdf:type :accelerateWheel ;
    :acceleration _:acceleration-value .
<ae/reduceSpeed/81> rdf:type bci:ActuationEvent ;
    bci:effectuates <deacceleration> ;
    bci:changes <right-rear-wheel/39> .
<ae/reduceSpeed/82> rdf:type bci:ActuationEvent ;
    bci:effectuates <deacceleration> ;
    bci:changes <left-rear-wheel/47> .
</pre></div>
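To illustrate how such an actuation graph can be consumed, the following SPARQL sketch (illustrative only, not part of the spec) retrieves each actuation together with its actuator, impacted property, result, and result time:

```sparql
PREFIX bci:  <https://w3id.org/BCI-ontology#>
PREFIX sosa: <http://www.w3.org/ns/sosa/>

# For each actuation: which actuator made it, which property it acted on,
# what result it produced, and when the result was produced.
SELECT ?actuation ?actuator ?property ?result ?resultTime
WHERE {
  ?actuation a bci:Actuation ;
             sosa:actuationMadeBy ?actuator ;
             sosa:actsOnProperty  ?property ;
             sosa:hasResult       ?result ;
             sosa:resultTime      ?resultTime .
}
ORDER BY ?resultTime
```

Because the example reuses the standard SOSA actuation properties, this query works over any BCI-O actuation graph, not just the wheelchair scenario.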
Observation Context: Visually Evoked Potential (VEP) Virtual Environment Navigation for Glaucoma Patients: The following use case presents an example that depicts how to define the related BCI-O observation and context concepts.
(*) Purpose: to present flickering visual stimuli on a subset of objects in a virtual environment and determine whether the subject suffers vision loss at a particular region, based on the EEG responses that he has to those stimuli.
(*) Description: Bob is navigating towards a pillar (target) located at the center of the field whilst avoiding obstacles randomly placed on the path in-between. A subject suffering from glaucoma may not be able to see objects in a certain sector of his peripheral vision, and hence might unknowingly hit those unseen objects. The major components in this scenario are:
(*) An EEG sensor capable of capturing brain signals using a certain electrode placement.
(*) A computing system capable of processing and analyzing (classifying) the brain signals collected from the EEG sensor.
(*) A gameplay recording mechanism to record the pathway that the subject took to reach the target.
(*) An event-generation mechanism that fires whenever the subject interacts with an object in the field, or whenever a flickering stimulus is presented in his field of vision.
The context (virtual environment) consists of the following objects:
(*) An assortment of objects of different sizes typically found in a forest, randomly placed throughout the environment, e.g., stones, trees and pit holes.
(*) A pillar (target) placed in the middle of the field, which flickers at an unnoticeably high frequency, e.g., 45 Hz.
(*) A selection of objects that flicker at a high frequency lower than the target frequency, e.g., 30-40 Hz.
(*) A high flickering frequency is necessary in order to minimize uneasiness felt by the subjects.
Several events are generated whenever:
(*) The subject hits upon an object (event data: location of the object).
(*) A flickering object appears within the subject's field of vision (event data: flickering frequency).
(*) The subject reaches the target.
A 3D animation of this virtual environment is available.
Experiment flow:
(*) The objective is for the subject to walk towards the middle of the field without hitting any objects placed on the field.
(*) Events are triggered and recorded whenever certain conditions are met.
(*) The pathway that the subject took to reach the target, and the locations of the objects that he hit (if any), are recorded.
Analysis mechanism:
(*) The flickering objects serve as an indicator of whether the subject actually sees them within his peripheral vision field: they will induce an EEG response matching the flickering frequency if the subject actually sees them.
(*) The event markers aid the analyst in doing selective epoch analysis based on the interests of the analysis, e.g., epochs in which a flickering object is within the subject's field of vision, or epochs in which he hit an object.
<DCMIType-StillImage>05-UserCase-Observation-Context.jpg</DCMIType-StillImage>
An RDF file containing a graph corresponding to this example is available.
<div class="example"><div class="example-title marker">Observation Context: Visually Evoked Potential (VEP) Virtual Environment Navigation for Glaucoma Patients</div>
<pre class="hljs xml" aria-busy="false" aria-live="polite">
@prefix :       <http://example.org/data/> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:    <http://www.w3.org/2002/07/owl#> .
@prefix schema: <http://schema.org/> .
@prefix time:   <http://www.w3.org/2006/time#> .
@prefix sosa:   <http://www.w3.org/ns/sosa/> .
@prefix ssn:    <http://www.w3.org/ns/ssn/> .
@prefix bci:    <https://w3id.org/BCI-ontology#> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .
@prefix qudt-1-1: <http://qudt.org/1.1/schema/qudt#> .
@prefix qudt-unit-1-1: <http://qudt.org/1.1/vocab/unit#> .
@base <http://example.org/data/> .

# The 3 main sets of definitions:
# (i)   Virtual Environment (Context): objects, behaviour, events (stimuli).
# (ii)  Subject, Actions.
# (iii) Session, EEG Record, EEG sensor (channeling scheme).

# Bob navigates in (interacts with) a virtual environment (context), labeled as ven4gp #1.
# The context scene forest #1 depicts the architectural design of the objects' layout as part of its structural composition.
# The session's activity is labeled "glaucoma-tracking #1".
<Bob> rdf:type bci:Subject .
<glaucoma-tracking/1> rdf:type bci:Activity .
<context/ven4gp/1> rdf:type bci:Context ;
    bci:hasScene <scene/forest/1> .
<scene/forest/1> rdf:type bci:Context.Scene ;
    bci:hasObject
        # Objects without flickering behavior:
        <static-stone/1>, <static-stone/2>, <static-stone/3>, <static-stone/4>,
        # ... associate here all the "static stones" in the scene.
        <static-tree/1>, <static-tree/2>, <static-tree/3>, <static-tree/4>,
        # ... associate here all the "static trees" in the scene.
        <static-pit-hole/1>, <static-pit-hole/2>, <static-pit-hole/3>,
        # ... associate here all the "static pit-holes" in the scene.
        # Objects with flickering behavior:
        <flickering-stone/1>, <flickering-stone/2>, <flickering-stone/3>,
        # ... associate here all the "flickering stones" in the scene.
        <flickering-tree/1>, <flickering-tree/2>, <flickering-tree/3>, <flickering-tree/4>,
        # ... associate here all the "flickering trees" in the scene.
        <flickering-pit-hole/1>, <flickering-pit-hole/2>, <flickering-pit-hole/3>,
        # ... associate here all the "flickering pit-holes" in the scene.
        # Target: pillar
        <target/pillar/1> .

# Below (1~9) are the main structural and functional component definitions for the virtual environment.

# (1) The LocatedObject context object: defines 3 associated values that represent the object's location (coordinates).
# - X-coor: coordinate in the X axis.
# - Y-coor: coordinate in the Y axis.
# - Z-coor: coordinate in the Z axis.
# This class gives the "location" notion to all the participant objects in the scene.
:LocatedObject rdfs:subClassOf bci:Context.Object .
:X-coor a owl:ObjectProperty ;
    schema:domainIncludes :LocatedObject ;
    schema:rangeIncludes qudt-1-1:QuantityValue .
:Y-coor a owl:ObjectProperty ;
    schema:domainIncludes :LocatedObject ;
    schema:rangeIncludes qudt-1-1:QuantityValue .
:Z-coor a owl:ObjectProperty ;
    schema:domainIncludes :LocatedObject ;
    schema:rangeIncludes qudt-1-1:QuantityValue .

# (2) The context objects that define a flickering (and non-flickering) behavior:
# - NonFlickeringObject: doesn't have any flickering mechanism (context method).
# - LowerFrequencyFlickeringObject: has a constant flickering mechanism (context method) --30Hz--.
# - TargetFrequencyFlickeringObject: has a constant flickering mechanism (context method) --45Hz--.
:NonFlickeringObject rdfs:subClassOf bci:Context.Object .
:LowerFrequencyFlickeringObject rdfs:subClassOf bci:Context.Object .
:TargetFrequencyFlickeringObject rdfs:subClassOf bci:Context.Object .

# (3) The located context object types of different nature that define the notions of stones, trees and pit holes.
:Stone rdfs:subClassOf :LocatedObject .
:Tree rdfs:subClassOf :LocatedObject .
:Pit-hole rdfs:subClassOf :LocatedObject .

# (4) The classes of static objects that don't flicker.
:StaticStone rdfs:subClassOf :NonFlickeringObject, :Stone .
:StaticTree rdfs:subClassOf :NonFlickeringObject, :Tree .
:StaticPit-hole rdfs:subClassOf :NonFlickeringObject, :Pit-hole .

# (5) The classes of flickering objects that are not the target (Pillar).
:FlickeringStone rdfs:subClassOf :LowerFrequencyFlickeringObject, :Stone .
:FlickeringTree rdfs:subClassOf :LowerFrequencyFlickeringObject, :Tree .
:FlickeringPit-hole rdfs:subClassOf :LowerFrequencyFlickeringObject, :Pit-hole .

# (6) The target (Pillar).
:Pillar rdfs:subClassOf :TargetFrequencyFlickeringObject, :LocatedObject .
<target/pillar/1> rdf:type :Pillar .
# (7) The flickering mechanisms (types of context methods):
# - flickeringAtLowerFrequency: a constant flickering mechanism of 30 Hz.
# - flickeringAtTargetFrequency: a constant flickering mechanism of 45 Hz.
# Both are bound to their correspondent objects.
:hasFrequency a owl:ObjectProperty ;
    schema:domainIncludes bci:Context.Method ;
    schema:rangeIncludes qudt-1-1:QuantityValue .

_:lower-freq-value rdf:type qudt-1-1:QuantityValue ;
    qudt-1-1:numericValue "30.0"^^xsd:double ;
    qudt-1-1:unit qudt-unit-1-1:Hertz .
_:target-freq-value rdf:type qudt-1-1:QuantityValue ;
    qudt-1-1:numericValue "45.0"^^xsd:double ;
    qudt-1-1:unit qudt-unit-1-1:Hertz .

<flickeringAtLowerFrequency> rdf:type bci:Context.Method ;
    bci:definesBehaviorOf :LowerFrequencyFlickeringObject ;
    :hasFrequency _:lower-freq-value .  # a constant value
<flickeringAtTargetFrequency> rdf:type bci:Context.Method ;
    bci:definesBehaviorOf :TargetFrequencyFlickeringObject ;
    :hasFrequency _:target-freq-value . # a constant value

# (8) The stimuli events notion associated to the flickering methods.
<flickeringEvent> rdf:type bci:Context.Event ;
    bci:effectuates <flickeringAtLowerFrequency>, <flickeringAtTargetFrequency> .

# (9) The subject events (actions) and capability.
<walk> rdf:type bci:Context.Capability .
<Bob> bci:canPerform <walk> .
:Action.HitObject rdfs:subClassOf bci:Action .
:Action.DetectFlickeringObject rdfs:subClassOf bci:Action .
:Action.ReachTarget rdfs:subClassOf bci:Action .
# Examples of how Bob issues his related actions:
# <Bob> bci:issues <action/hit-object/344>
# <Bob> bci:issues <action/detect-flickering/96>

# Below is an excerpt of the definitions for the assortment of objects that participate in the scene:
<static-stone/1> rdf:type :StaticStone ;
    :X-coor [ rdf:type qudt-1-1:QuantityValue ;
              qudt-1-1:numericValue "6.75"^^xsd:double ;
              qudt-1-1:unit qudt-unit-1-1:Meter ] ;
    :Y-coor [ rdf:type qudt-1-1:QuantityValue ;
              qudt-1-1:numericValue "2.15"^^xsd:double ;
              qudt-1-1:unit qudt-unit-1-1:Meter ] ;
    :Z-coor [ rdf:type qudt-1-1:QuantityValue ;
              qudt-1-1:numericValue "0.0"^^xsd:double ;
              qudt-1-1:unit qudt-unit-1-1:Meter ] .
<static-stone/2> rdf:type :StaticStone . # + X, Y, Z coordinates
# static-stone/3 ... , static-stone/4 ...
<static-tree/1> rdf:type :StaticTree . # + X, Y, Z coordinates
# static-tree/2 ... , static-tree/3 ... , static-tree/4 ...
<static-pit-hole/1> rdf:type :StaticPit-hole . # + X, Y, Z coordinates
# static-pit-hole/2 ... , static-pit-hole/3 ...
<flickering-stone/1> rdf:type :FlickeringStone ;
    :X-coor [ rdf:type qudt-1-1:QuantityValue ;
              qudt-1-1:numericValue "5.05"^^xsd:double ;
              qudt-1-1:unit qudt-unit-1-1:Meter ] ;
    :Y-coor [ rdf:type qudt-1-1:QuantityValue ;
              qudt-1-1:numericValue "12.37"^^xsd:double ;
              qudt-1-1:unit qudt-unit-1-1:Meter ] ;
    :Z-coor [ rdf:type qudt-1-1:QuantityValue ;
              qudt-1-1:numericValue "0.0"^^xsd:double ;
              qudt-1-1:unit qudt-unit-1-1:Meter ] .
<flickering-stone/2> rdf:type :FlickeringStone . # + X, Y, Z coordinates
# flickering-stone/3 ...
<flickering-tree/1> rdf:type :FlickeringTree . # + X, Y, Z coordinates
# flickering-tree/2, flickering-tree/3 ...
<flickering-pit-hole/1> rdf:type :FlickeringPit-hole . # + X, Y, Z coordinates
# flickering-pit-hole/2 ... , flickering-pit-hole/3 ...
<target/pillar/1> rdf:type :Pillar . # + X, Y, Z coordinates ; location: in the middle of the field.

# The session observes an EEG-Record with its related settings.
<session/9> rdf:type bci:Session ;
    bci:hasTitle "Visually Evoked Potential (VEP) Virtual Environment Navigation for Glaucoma Patients" ;
    bci:isSessionOf <Bob>, <context/ven4gp/1> ;
    bci:hasActivity <glaucoma-tracking/1> ;
    bci:hasRecord <EegRecord/18> .

# Bob's EEG-Record is observed by the EEG-device eeg-dev #3.
# The EEG observation is associated with the following metadata:
# - the data recordings (result: RecordedData).
# - the channeling schema used to collect the brain signals (RecordChannelingSpec).
# - a time interval for its duration.
# - a timestamp for its associated result.
<EegRecord/18> rdf:type bci:EegRecord ;
    bci:observedByDevice <eeg-dev/3> ;
    bci:hasRecordChannelingSpec <channeling-spec/18> ;
    bci:observationResult <data-file/2> ;
    bci:aspectOfInterest <vision-field-sensitivity> ;
    bci:observedModality <mfSSVEP> ;
    ssn:wasOriginatedBy <flickeringAtLowerFrequency>, <flickeringAtTargetFrequency> ;
    sosa:phenomenonTime _:observation-time ;
    sosa:resultTime "2018-05-28T16:42:51+00:00"^^xsd:dateTimeStamp .

# The time interval of the observation:
_:observation-time rdf:type time:Interval ;
    time:hasBeginning [ rdf:type time:Instant ;
                        time:inXSDDateTimeStamp "2018-05-28T16:40:08+00:00"^^xsd:dateTimeStamp ] ;
    time:hasEnd [ rdf:type time:Instant ;
                  time:inXSDDateTimeStamp "2018-05-28T16:42:48+00:00"^^xsd:dateTimeStamp ] .

# The core related descriptions of the BCI descriptive features are shown below.

# EegDevice eeg-dev #3 observes the EEG recordings of Bob.
<eeg-dev/3> rdf:type bci:EegDevice ;
    bci:observes <mfSSVEP> ;
    bci:madeRecord <EegRecord/18> .

# The data recording data-file #2 is the result of the observation.
<data-file/2> rdf:type bci:RecordedData ;
    bci:isProducedByDevice <eeg-dev/3> .

# The channeling scheme spec used in the observation.
<channeling-spec/18> rdf:type bci:RecordChannelingSpec ;
    bci:hasChannelData <eeg-channel/1>, <eeg-channel/2>, <eeg-channel/3> .
    # ... associate here all the channels set for the recording.

# The channel definitions used in the recordings.
<eeg-channel/1> rdf:type bci:EegChannel . # ... insert here the channel's settings.
<eeg-channel/2> rdf:type bci:EegChannel . # ... insert here the channel's settings.
<eeg-channel/3> rdf:type bci:EegChannel . # ... insert here the channel's settings.
# ... for all the defined channels.

# The aspect and modality of the observation.
<vision-field-sensitivity> rdf:type bci:NeurologicalAspect ;
    bci:hasModality <eeg-dev/3> .
<mfSSVEP> rdf:type bci:EegModality ;
    bci:hasChannelingSpec <channeling-spec/18> .
</pre></div>
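As a usage sketch over this observation graph (the query is illustrative and assumes the example-local :hasFrequency property defined above, which is not part of BCI-O), an analyst could list the flickering methods that originated the EEG record, together with their stimulation frequencies:

```sparql
PREFIX :         <http://example.org/data/>
PREFIX bci:      <https://w3id.org/BCI-ontology#>
PREFIX ssn:      <http://www.w3.org/ns/ssn/>
PREFIX qudt-1-1: <http://qudt.org/1.1/schema/qudt#>

# Flickering context methods that originated an EEG record, with their frequencies.
SELECT ?record ?method ?freq ?unit
WHERE {
  ?record a bci:EegRecord ;
          ssn:wasOriginatedBy ?method .
  ?method :hasFrequency ?value .
  ?value  qudt-1-1:numericValue ?freq ;
          qudt-1-1:unit ?unit .
}
```

Distinguishing the 30 Hz and 45 Hz methods this way is what allows the selective epoch analysis described above to separate lower-frequency stimuli from the target stimulus.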
BCI Ontology
CerebraTek Pod ontology (for an mfSSVEP visual stimuli pattern): As one of the earliest direct applications of BCI-O, this ontology specifies the relevant metadata vocabulary for BCI data capture activities using the CerebraTek Pod devices, applied to glaucoma diagnostics using mfSSVEP (Steady-State Visually Evoked Potential with Vision Field Sensitivity). Its spec is published at http://bci.pet.cs.nctu.edu.tw/ontology?cerebratek_nupod.owl. Its representational RDF graph, along with its alignments to BCI-O, is depicted in the following model: <DCMIType-StillImage>01-ctnp-RDF-graph-diagram.png</DCMIType-StillImage>
ESS+HED Standards Ontology for BCI-O: As the first BCI-O extension for industry, this ontology specifies the relevant metadata vocabulary for BCI data capture activities under the ESS+HED Standards. Its main purpose is to provide a simple, compatible BCI-O-based ontology for the ESS+HED Standards. Its spec is published at http://bci.pet.cs.nctu.edu.tw/ontology?ESS_HED.owl. Its representational RDF graph, along with its alignments to BCI-O, is depicted in the following model: <DCMIType-StillImage>01-esshed-RDF-graph-diagram.png</DCMIType-StillImage>
https://w3id.org/BCI-ontology#
W3C/OGC Spatial Data on the Web Working Group
The range of skos:altLabel is the class of RDF plain literals.
skos:prefLabel, skos:altLabel and skos:hiddenLabel are pairwise disjoint properties.
alternative label
An alternative lexical label for a resource.
Acronyms, abbreviations, spelling variants, and irregular plural/singular forms may be included among the alternative labels for a concept. Mis-spelled terms are normally included as hidden labels (see skos:hiddenLabel).
change note
A note about a modification to a concept.
definition
A statement or formal explanation of the meaning of a concept.
editorial note
A note for an editor, translator or maintainer of the vocabulary.
example
An example of the use of a concept.
The range of skos:hiddenLabel is the class of RDF plain literals.
skos:prefLabel, skos:altLabel and skos:hiddenLabel are pairwise disjoint properties.
hidden label
A lexical label for a resource that should be hidden when generating visual displays of the resource, but should still be accessible to free text search operations.
history note
A note about the past state/use/meaning of a concept.
note
A general note, for any purpose.
This property may be used directly, or as a super-property for more specific note types.
A resource has no more than one value of skos:prefLabel per language tag, and no more than one value of skos:prefLabel without language tag.
The range of skos:prefLabel is the class of RDF plain literals.
skos:prefLabel, skos:altLabel and skos:hiddenLabel are pairwise disjoint properties.
preferred label
The preferred lexical label for a resource, in a given language.
scope note
A note that helps to clarify the meaning and/or the use of a concept.
The relation holding between any Agent and a SocialAgent. In principle, a SocialAgent requires at least one PhysicalAgent in order to act, but this dependency can be 'delegated'; e.g. a university can be acted for by a department, which in its turn is acted for by physical agents.
acts for
agisce per
The relation holding between a PhysicalAgent and a SocialAgent. In principle, a SocialAgent requires at least one PhysicalAgent in order to act, but this dependency can be 'delegated', e.g. a university can be acted for by a department, which is acted for by physical agents. AKA isActedBy
acts through
agisce mediante
A catch-all object property, useful for alignment and querying purposes.
It is declared as both transitive and symmetric, in order to reason on a maximal closure of associations between individuals.
associatedWith
A relation between concepts and collections, where a Concept is said to characterize a Collection; it corresponds to a link between the (reified) intensional and extensional interpretations of a _proper subset of_ a (reified) class. This is different from covers, because covers refers to an interpretation of the entire reified class.
E.g. the collection of vintage saxophones is characterized by the Concept 'manufactured by hand', while it gets covered by the Concept 'Saxophone' with the Parameter 'Vintage'.
caratterizza
characterizes
A relation between a Concept and an Entity, e.g. the Role 'student' classifies a Person 'John'.
classifica
classifies
A relation stating that an Agent is internally representing a SocialObject: situations, descriptions, concepts, etc. E.g., 'John believes in the conspiracy theory'; 'Niels Bohr created the solar-system metaphor for the atomic theory'; 'Jacques assumes all swans are white'; 'the task force members share the attack plan'.
Conceptualizations can be distinguished into different forms, primarily based on the type of SocialObject that is conceptualized. Descriptions and concepts can be 'assumed', situations can be 'believed' or 'known', plans can be 'adopted', etc. (see ontology: http://www.ontologydesignpatterns.org/ont/dul/Conceptualization.owl).
conceptualizes
concettualizza
A relation between an InformationRealization and a Description, e.g. 'the printout of the Italian Constitution concretelyExpresses the Italian Constitution'. It should be supplied also with a rule stating that the InformationRealization realizes an InformationObject that expresses the Description
concretely expresses
esprime concretamente
A relation between two objects participating in a same Event; e.g., 'Vitas and Jimmy are playing tennis'.
co-participates with
copartecipa con
A relation between concepts and collections, where a Concept is said to cover a Collection; it corresponds to a link between the (reified) intensional and extensional interpretations of a (reified) class.
E.g. the collection of vintage saxophones is covered by the Concept 'Saxophone' with the Parameter 'Vintage'.
covers
ricopre
A relation between a Description and a Concept, e.g. a Workflow for a governmental Organization defines the Role 'officer', or 'the Italian Traffic Law defines the role Vehicle'.
defines
definisce
A relation between a description and a role, e.g. the recipe for a cake defines the role 'ingredient'.
defines role
definisce il ruolo
A relation between a description and a task, e.g. the recipe for a cake defines the task 'boil'.
defines task
definisce il task
The relation between a Description and an Entity : a Description gives a unity to a Collection of parts (the components), or constituents, by assigning a Role to each of them in the context of a whole Object (the system).
A same Entity can be given different descriptions, for example, an old cradle can be given a unifying Description based on the original aesthetic design, the functionality it was built for, or a new aesthetic functionality in which it can be used as a flower pot.
describes
descrive
The intransitive follows relation. For example, Wednesday directly follows Tuesday. Directness of precedence depends on the designer conceptualization.
directly follows
segue direttamente
The intransitive precedes relation. For example, Monday directly precedes Tuesday. Directness of precedence depends on the designer conceptualization.
directly precedes
precede direttamente
A relation between an action and a task, e.g. 'putting some water in a pot and putting the pot on a fire until the water starts bubbling' executes the task 'boiling'.
esegue il task
executes task
A partial order relation that holds between descriptions. It represents the proper part relation between a description and another description featuring the same properties as the former, with at least one additional one.
Descriptions can be expanded either by adding other descriptions as parts, or by refining concepts that are used by them.
An 'intention' to expand must be present (unless purely formal theories are considered, but even in this case a criterion of relevance is usually active).
espande
expands
A relation between an InformationObject and a 'meaning', generalized here as a 'SocialObject'. For example: 'A Beehive is a structure in which bees are kept, typically in the form of a dome or box.' (Oxford dictionary)'; 'the term Beehive expresses the concept Beehive in my apiculture ontology'.
The intuition for 'meaning' is intended to be very broad. A separate, large comment is included for those who want to investigate more on what kind of meaning can be represented in what form.
This is a large comment field for those who want to investigate the different uses of the 'expresses' relation for modeling different approaches to meaning characterization and modeling.
For example, in all these cases, some aspect of meaning is involved:
- Beehive means "a structure in which bees are kept, typically in the form of a dome or box." (Oxford dictionary)
- 'Beehive' is a synonym in noun synset 09218159 "beehive|hive" (WordNet)
- 'the term Beehive can be interpreted as the fact of 'being a beehive', i.e. a relation that holds for concepts such as Bee, Honey, Hosting, etc.'
- 'the text of Italian apiculture regulation expresses a rule by which beehives should be kept at least one kilometer away from inhabited areas'
- 'the term Beehive expresses the concept Beehive'
- ''Beehive' for apiculturists does not express the same meaning as for, say, fishermen'
- 'Your meaning of 'Beautiful' does not seem to fit mine'
- ''Beehive' is formally interpreted as the set of all beehives'
- 'from the term 'Beehive', we can build a vector space of statistically significant cooccurring terms in the documents that contain it'
- the lexeme 'Belly' expresses the role 'Body_Part' in the frame 'ObservableBodyParts' (FrameNet)
As the examples suggest, the 'meaning of meaning' is dependent on the background approach/theory that one assumes. One can hardly summarize the many approaches and theories of meaning; therefore, this relation is maybe the most controversial and difficult to explain, and normally, in such cases, it would be better to give up formalizing.
However, the usefulness of having a 'semantic abstraction' in modeling information objects is so high (e.g. for the semantic web, interoperability, reengineering, etc.), that we accept this challenging task, although without taking any particular position in the debate.
We provide here some examples, which we want to generalize upon when using the 'expresses' relation to model semantic aspects of social reality.
In the most common approach, lexicographers that write dictionaries, glossaries, etc. assume that the meaning of a term is a paraphrase (or 'gloss', or 'definition').
Another approach is provided by concept schemes like thesauri and lexicons, which assume that the meaning of a term is a 'concept', encoded as a 'lemma', 'synset', or 'descriptor'.
Still another approach is that of psychologists and cognitive scientists, which often assume that the meaning of an information object is a concept encoded in the mind or cognitive system of an agent.
A radically different approach is taken by social scientists and semioticians, who usually assume that meanings of an information object are spread across the communication practices in which members of a community use that object.
Another approach that tackles the distributed nature of meaning is assumed by geometrical models of semantics, which assume that the meaning of an InformationObject (e.g. a word) results from the set of informational contexts (e.g. within texts) in which that object is used similarly.
The logical approach to meaning is still different, since it assumes that the meaning of e.g. a term is equivalent to the set of individuals that the term can be applied to; for example, the meaning of 'Ali' is e.g. an individual person called Ali, the meaning of 'Airplane' is e.g. the set of airplanes, etc.
Finally, an approach taken by structuralist linguistics and frame semantics is that a meaning is the relational context in which an information object can be applied; for example, a meaning of 'Airplane' is situated e.g. in the context ('frame') of passenger airline flights.
These different approaches are not necessarily conflicting, and they mostly talk about different aspects of so-called 'semantics'. They can be summarized and modelled within DOLCE-Ultralite as follows (notice that such a list is far from exhaustive):
(1) Informal meaning (as for linguistic or commonsense semantics: a distinction is assumed between (informal) meaning and reference; see isAbout for an alternative pattern on reference)
- Paraphrase meaning (as for lexicographic semantics). Here it is modelled as the expresses relation between instances of InformationObject and different instances of InformationObject that act as 'paraphrases'
- Conceptual meaning (as for 'concept scheme' semantics). Here it is modelled as the expresses relation between instances of InformationObject and instances of Concept
- Relational meaning (as for frame semantics). Here it is modelled as the expresses relation between instances of InformationObject and instances of Description
- Cognitive meaning (as for 'psychological' semantics). Here it is modelled as the expresses relation between any instance of InformationObject and any different instance of InformationObject that isRealizedBy a mental, cognitive or neural state (depending on which theory of mind is assumed). Such states can be considered here as instances of Process (occurring in the mind, cognitive system, or neural system of an agent)
- Cultural meaning (as for 'social science' semantics). Here it is modelled as the expresses relation between instances of InformationObject and instances of SocialObject (institutions, cultural paradigms, norms, social practices, etc.)
- Distributional meaning (as for geometrical models of meaning). Here it is modelled as the expresses relation between any instance of InformationObject and any different instance of InformationObject that isFormallyRepresentedIn some (geometrical) Region (e.g. a vector space)
(2) Formal meaning (as for logic and formal semantics: no distinction is assumed between informal meaning and reference, therefore between 'expresses' and 'isAbout', which can be used interchangeably)
- Object-level formal meaning (as in the traditional first-order logic semantics). Here it is modelled as the expresses relation between an instance of InformationObject and an instance of Collection that isGroundingFor (in most cases) a Set; isGroundingFor is defined in the ontology: http://www.ontologydesignpatterns.org/ont/dul/IOLite.owl
- Modal formal meaning (as in possible-world semantics). Here it is modelled as the expresses relation between an instance of InformationObject and an instance of Collection that isGroundingFor a Set, and which isPartOf some different instance of Collection that isGroundingFor a PossibleWorld
This is only a first step to provide a framework, in which one can model different aspects of meaning. A more developed ontology should approach the problem of integrating the different uses of 'expresses', so that different theories, resources, methods can interoperate.
esprime
expresses
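As a rough illustration of the distinction drawn above between paraphrase meaning and conceptual meaning, the sketch below encodes both readings of dul:expresses as plain (subject, predicate, object) tuples in Python. All IRIs (':term_Beehive', ':gloss_Beehive', ':concept_Beehive') are hypothetical examples, not individuals defined in DUL.

```python
# Sketch: modelling two kinds of 'meaning' with dul:expresses,
# using plain (subject, predicate, object) tuples. All IRIs are
# hypothetical examples, not terms guaranteed to exist in DUL data.

EXPRESSES = "dul:expresses"

triples = {
    # Paraphrase meaning: an InformationObject expresses another
    # InformationObject that acts as its gloss.
    (":term_Beehive", EXPRESSES, ":gloss_Beehive"),
    # Conceptual meaning: the same InformationObject expresses a Concept.
    (":term_Beehive", EXPRESSES, ":concept_Beehive"),
}

types = {
    ":term_Beehive": "dul:InformationObject",
    ":gloss_Beehive": "dul:InformationObject",
    ":concept_Beehive": "dul:Concept",
}

def meanings_of(term):
    """Return everything a given information object expresses."""
    return sorted(o for s, p, o in triples if s == term and p == EXPRESSES)

print(meanings_of(":term_Beehive"))
# [':concept_Beehive', ':gloss_Beehive']
```

Note how nothing in the pattern itself privileges one theory of meaning: the same property links the term to a gloss and to a concept, and the type of the object distinguishes the readings.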
A relation between an InformationObject and a Concept, e.g. the term "dog" expresses the Concept "dog". For expressing a relational meaning, see the more general object property: expresses.
esprime il concetto
expresses concept
Generic distance relation between any Entity(s). E.g. Rome is far from Beijing, astronomy is far from necromancy.
far from
A relation between entities, expressing a 'sequence' schema.
E.g. 'year 2000 follows 1999', 'preparing coffee' follows 'deciding what coffee to use', 'World War II follows World War I', etc.
It can be used between tasks, processes or time intervals, and subproperties would fit best in order to distinguish the different uses.
follows
segue
A relation to encode either formal or informal characterizations of 'boundaries' common to two different entities: an Event that ends when another begins, two abstract regions that have a common topological boundary, two objects that are said to be 'in contact' from a commonsense perspective, etc.
has common boundary
The hasProperPart relation without transitivity, holding between an Object (the system) and another (the component), and assuming a Design that structures the Object.
ha componente
has component
'Constituency' depends on some layering of the world described by the ontology. For example, scientific granularities (e.g. body-organ-tissue-cell) or ontological 'strata' (e.g. social-mental-biological-physical) are typical layerings.
Intuitively, a constituent is a part belonging to a lower layer. Since layering is actually a partition of the world described by the ontology, constituents are not properly classified as parts, although this kinship can be intuitive for common sense.
A desirable advantage of this distinction is that we are able to talk e.g. of physical constituents of non-physical objects (e.g. systems), while this is not possible in terms of parts.
Examples are the persons constituting a social system, the molecules constituting a person, the atoms constituting a river, etc.
In all these examples, we notice a typical discontinuity between the constituted and the constituent object: e.g. a social system is conceptualized at a different layer from the persons that constitute it, a person is conceptualized at a different layer from the molecules that constitute them, and a river is conceptualized at a different layer from the atoms that constitute it.
ha costituente
has constituent
A relation between parameters and entities. It allows one to assert generic constraints (encoded as parameters), e.g. MinimumAgeForDriving isConstraintFor John (where John is a legal subject under the TrafficLaw).
The intended semantics (not expressible in OWL) is that an Entity hasConstraint a Parameter if the Parameter isParameterFor a Concept that classifies that Entity; moreover, it entails that the Parameter parametrizes a Region that isRegionFor that Entity.
ha vincolo
has constraint
Aldo Gangemi
2024-02-04T11:18:50Z
A generic, relative spatial location, holding between any entities. E.g. 'the cat is on the mat', 'Omar is in Samarcanda', 'the wound is close to the femural artery'.
For 'absolute' locations, see SpaceRegion
ha localizzazione
has location
A relation between collections and entities, e.g. 'my collection of saxophones includes an old Adolphe Sax original alto' (i.e. my collection has member an Adolphe Sax alto).
ha membro
has member
A Concept can have a Parameter that constrains the attributes that a classified Entity can have in a certain Situation, e.g. a 4WheelDriver Role definedIn the ItalianTrafficLaw has a MinimumAge parameter on the Amount 16.
ha parametro
has parameter
A schematic relation between any entities, e.g. 'the human body has a brain as part', '20th century contains year 1923', 'World War II includes the Pearl Harbour event'.
Parthood should assume the basic properties of mereology: transitivity, antisymmetry, and reflexivity (proper Parthood of course misses reflexivity).
However, antisymmetry is not supported in OWL2 explicitly, therefore DUL has to adopt one of two patterns:
1) dropping asymmetry axioms, while granting reflexivity: this means that symmetry is not enforced, but permitted for the case of reflexivity. Of course, in this way we cannot prevent symmetric usages of hasPart;
2) dropping the reflexivity axiom, and enforcing asymmetry: in this case, we would prevent all symmetric usages, but we lose the possibility of enforcing reflexivity, which is commonsensical in parthood.
In DUL, we adopt pattern #1 for partOf, and pattern #2 for properPartOf, which seems a good approximation: due to the lack of inheritance of property characteristics, each asymmetric hasProperPart assertion would also be a reflexive hasPart assertion (reflexive reduction design pattern).
Subproperties and restrictions can be used to specialize hasPart for objects, events, etc.
ha parte
has part
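The two parthood patterns described above can be sketched procedurally: hasPart as the reflexive, transitive closure of hasProperPart (the reflexive reduction design pattern), with hasProperPart kept asymmetric. The individuals ('body', 'brain', 'cortex') are illustrative, not DUL terms.

```python
# Sketch of the two DUL parthood patterns, with hypothetical individuals.
# Pattern #1 (hasPart): reflexive and transitive, symmetry not prevented.
# Pattern #2 (hasProperPart): asymmetric, hence irreflexive.

from itertools import product

proper_parts = {("body", "brain"), ("brain", "cortex")}

def transitive_closure(pairs):
    """Smallest transitive relation containing the given pairs."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(closure, repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

# hasPart = transitive closure of hasProperPart plus reflexive pairs
# (the 'reflexive reduction' design pattern described above).
entities = {x for pair in proper_parts for x in pair}
has_part = transitive_closure(proper_parts) | {(x, x) for x in entities}

assert ("body", "cortex") in has_part      # transitivity
assert ("body", "body") in has_part        # reflexivity of hasPart
assert all((b, a) not in proper_parts      # asymmetry of hasProperPart
           for a, b in proper_parts)
print("parthood patterns hold")
```

The sketch also shows why the approximation is reasonable: every hasProperPart assertion lands in hasPart, while reflexive pairs live only in hasPart.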
A relation between an object and a process, e.g. 'John took part in the discussion', 'a large mass of snow fell during the avalanche', or 'a cook, some sugar, flour, etc. are all present in the cooking of a cake'.
ha come partecipante
has participant
Direct succession applied to situations.
E.g., 'A postcondition of our Plan is to have things settled'.
ha postcondizione
has postcondition
Direct precedence applied to situations.
E.g., 'A precondition to declare war against a foreign country is claiming to find nuclear weapons in it'.
ha precondizione
has precondition
Asymmetric (hence irreflexive) parthood.
has proper part
A relation between entities and qualities, e.g. 'Dmitri's skin is yellowish'.
ha qualità
has quality
A relation between entities and regions, e.g. 'the number of wheels of that truck is 12', 'the time of the experiment is August 9th, 2004', 'the whale has been localized at 34 degrees E, 20 degrees S'.
ha attributo
has region
A relation between an object and a role, e.g. the person 'John' has role 'student'.
ha ruolo
has role
A relation between entities and situations, e.g. 'this morning I've prepared my coffee with a new fantastic Arabica', i.e.: (an amount of) a new fantastic Arabica hasSetting the preparation of my coffee this morning.
has setting
è nel contesto di
A relation between roles and tasks, e.g. 'students have the duty of giving exams' (i.e. the Role 'student' hasTask the Task 'giving exams').
ha come obiettivo
has task
The generic relation between events and time intervals.
ha intervallo temporale
has time interval
A relation between situations and actions, e.g. 'this morning I've prepared my coffee and had my fingers burnt' (i.e.: the preparation of my coffee this morning included a burning of my fingers).
include azione
includes action
A relation between situations and persons, e.g. 'this morning I've prepared my coffee and had my fingers burnt' (i.e.: the preparation of my coffee this morning included me).
include l'agente
includes agent
A relation between situations and events, e.g. 'this morning I've prepared my coffee and had my fingers burnt' (i.e.: the preparation of my coffee this morning included a burning of my fingers).
include l'evento
includes event
A relation between situations and objects, e.g. 'this morning I've prepared my coffee and had my fingers burnt' (i.e.: the preparation of my coffee this morning included me).
include l'oggetto
includes object
Aldo Gangemi
2021-04-03T14:11:09Z
A relation between situations and time intervals, e.g. 'this morning I've prepared my coffee and had my fingers burnt' (i.e.: preparing my coffee was held this morning). A data value attached to the time interval typically complements this modelling pattern.
include tempo
includes time
Aldo Gangemi
2021-04-03T14:11:01Z
A relation between a Description and a SocialAgent, e.g. a Constitutional Charter introduces the SocialAgent 'PresidentOfRepublic'.
introduce
introduces
Agent participation.
coinvolge agente
involves agent
A relation between an information object and an Entity (including information objects). It can be used to talk about entities that are references of proper nouns: the proper noun 'Leonardo da Vinci' isAbout the Person Leonardo da Vinci; as well as to talk about sets of entities that can be described by a common noun: the common noun 'person' isAbout the set of all persons in a domain of discourse, which can be represented in DOLCE-Ultralite as an individual of the class: dul:Collection.
A specific sentence may use common nouns with either a singular or plural reference, or it can even refer to all possible references (e.g. in a lexicographic definition): all those uses are kinds of aboutness.
The isAbout relation is sometimes considered as reflexive, however this is semiotically inaccurate, because information can be about itself ('de dicto' usage, as in 'John is four character long'), but it is typically about something else ('de re' usage, as in 'John loves Mary').
If a reflexivity exists in general, it rather concerns its realisation, which is always associated with an event, e.g. an utterance, which makes the information denote itself, besides its aboutness. This is implemented in DUL with the dul:realizesSelfInformation property, which is used with local reflexivity in the dul:InformationRealization class.
is about
si riferisce a
is action included in
è un'azione nel contesto di
is agent included in
è un agente nel contesto di
Agent participation.
is agent involved in
è un agente coinvolto in
is characterized by
è caratterizzato da
A relation between a Concept and an Entity, e.g. 'John is considered a typical rude man'; your last concert constitutes the achievement of a lifetime; '20-year-old means she's mature enough'.
is classified by
è classificato da
The asymmetric isProperPartOf relation without transitivity, holding between an Object (the system) and another (the component), and assuming a Design that structures the Object.
is component of
è componente di
A relation between an InformationObject and a Concept, e.g. the term "dog" expresses the Concept "dog". For expressing a relational meaning, see the more general object property: expresses.
is concept expressed by
è un concetto espresso da
A more generic relation holding between a Description and a Concept. In order to be used, a Concept must be previously definedIn another Description
is concept used in
è un concetto usato in
A relation stating that an Agent is internally representing a Description . E.g., 'John believes in the conspiracy theory'; 'Niels Bohr created a solar-system metaphor for his atomic theory'; 'Jacques assumes all swans are white'; 'the task force shares the attack plan'.
is conceptualized by
è concettualizzato da
A relation between an InformationRealization and a Description, e.g. 'the printout of the Italian Constitution concretelyExpresses the Italian Constitution'. It should also be supplied with a rule stating that the InformationRealization realizes an InformationObject that expresses the Description.
is concretely expressed by
è espresso concretamente da
'Constituency' depends on some layering of the world described by the ontology. For example, scientific granularities (e.g. body-organ-tissue-cell) or ontological 'strata' (e.g. social-mental-biological-physical) are typical layerings.
Intuitively, a constituent is a part belonging to a lower layer. Since layering is actually a partition of the world described by the ontology, constituents are not properly classified as parts, although this kinship can be intuitive for common sense.
A desirable advantage of this distinction is that we are able to talk e.g. of physical constituents of non-physical objects (e.g. systems), while this is not possible in terms of parts.
Examples are the persons constituting a social system, the molecules constituting a person, the atoms constituting a river, etc.
In all these examples, we notice a typical discontinuity between the constituted and the constituent object: e.g. a social system is conceptualized at a different layer from the persons that constitute it, a person is conceptualized at a different layer from the molecules that constitute them, and a river is conceptualized at a different layer from the atoms that constitute it.
is constituent of
è costituente di
A relation between parameters and entities. It allows one to assert generic constraints (encoded as parameters), e.g. MinimumAgeForDriving isConstraintFor John (where John is a legal subject under the TrafficLaw).
The intended semantics (not expressible in OWL) is that a Parameter isConstraintFor an Entity if the Parameter isParameterFor a Concept that classifies that Entity; moreover, it entails that a Parameter parametrizes a Region that isRegionFor that Entity. The use in OWL is therefore a shortcut to annotate what Parameter constrains what Entity.
is constraint for
è un vincolo per
A relation between concepts and collections, where a Concept is said to cover a Collection; it corresponds to a link between the (reified) intensional and extensional interpretations of a (reified) class.
E.g. the collection of vintage saxophones is covered by the Concept 'Saxophone' with the Parameter 'Vintage'.
is covered by
è ricoperto da
A relation between a Description and a Concept, e.g. a Workflow for a governmental Organization defines the Role 'officer', or 'the Italian Traffic Law defines the role Vehicle'.
is defined in
è definito in
The relation between an Entity and a Description: a Description gives a unity to a Collection of parts (the components), or constituents, by assigning a Role to each of them in the context of a whole Object (the system).
The same Entity can be given different descriptions; for example, an old cradle can be given a unifying Description based on the original aesthetic design, the functionality it was built for, or a new aesthetic functionality in which it can be used as a flower pot.
is described by
è descritto da
is event included in
è un evento nel contesto di
A relation between an action and a task, e.g. 'putting some water in a pot and putting the pot on a fire until the water starts bubbling' executes the task 'boiling'.
is executed in
è eseguito mediante
A partial order relation that holds between descriptions. It represents the proper part relation between a description and another description featuring the same properties as the former, plus at least one additional property.
Descriptions can be expanded either by adding other descriptions as parts, or by refining concepts that are used by them.
An 'intention' to expand must be present (unless purely formal theories are considered, but even in this case a criterion of relevance is usually active).
is expanded in
è espansa in
A relation between a dul:SocialObject (the 'meaning') and a dul:InformationObject (the 'expression').
For example: 'A Beehive is a structure in which bees are kept, typically in the form of a dome or box' (Oxford dictionary); 'the term Beehive expresses the concept Beehive in my apiculture ontology'.
The intuition for 'meaning' is intended to be very broad. A separate, large comment is included in the encoding of 'expresses', for those who want to investigate more on what kind of meaning can be represented in what form.
is expressed by
è espresso da
A relation between a Description and a SocialAgent, e.g. a Constitutional Charter introduces the SocialAgent 'PresidentOfRepublic'.
is introduced by
è introdotto da
A generic, relative localization, holding between any entities. E.g. 'Rome is the seat of the Pope', 'the liver is the location of the tumor'.
For 'absolute' locations, see SpaceRegion
is location of
è una localizzazione di
A relation between collections and entities, e.g. 'the Night Watch by Rembrandt is in the Rijksmuseum collection'; 'Davide is member of the Pen Club', 'Igor is one the subjects chosen for the experiment'.
is member of
è membro di
is object included in
è un oggetto nel contesto di
A relation to represent a (past, present or future) TimeInterval at which an Entity is observable.
In order to encode a specific time, a data value should be related to the TimeInterval.
An alternative way of representing time is the datatype property: hasIntervalDate
is observable at
è osservabile a
A Concept can have a Parameter that constrains the attributes that a classified Entity can have in a certain Situation, e.g. a 4WheelDriver Role definedIn the ItalianTrafficLaw has a MinimumAge parameter on the Amount 16.
is parameter for
è un parametro per
The relation between a Parameter, e.g. 'MajorAge', and a Region, e.g. '>17 year'.
is parametrized by
è parametrizzato da
A relation between any entities, e.g. 'brain is a part of the human body'. See dul:hasPart for additional documentation.
is part of
è parte di
A relation between an object and a process, e.g. 'John took part in the discussion', 'a large mass of snow fell during the avalanche', or 'a cook, some sugar, flour, etc. are all present in the cooking of a cake'.
is participant in
è un partecipante a
Direct succession applied to situations.
E.g., 'Taking some rest is a postcondition of my search for a hotel'.
is postcondition of
è postcondizione di
Direct precedence applied to situations.
E.g., 'claiming to find nuclear weapons in a foreign country is a precondition to declare war against it'.
is precondition of
è precondizione di
See dul:hasProperPart for additional documentation.
is proper part of
A relation between entities and qualities, e.g. 'Dmitri's skin is yellowish'.
is quality of
è una qualità di
A relation between an information realization and an information object, e.g. the paper copy of the Italian Constitution realizes the text of the Constitution.
is realized by
è realizzato da
A relation between information objects and any Entity (including information objects). It can be used to talk about entities that are references of proper nouns: the proper noun 'Leonardo da Vinci' isAbout the Person Leonardo da Vinci; as well as to talk about sets of entities that can be described by a common noun: the common noun 'person' isAbout the set of all persons in a domain of discourse, which can be represented in DOLCE-Ultralite as an individual of the class: Collection.
The isReferenceOf relation is irreflexive, differently from its inverse isAbout.
is reference of
è il riferimento di
The relation between entities and information realizations, e.g. between Italy and a paper copy of the text of the Italian Constitution.
is reference of information realized by
è riferimento dell'informazione realizzata da
A relation between entities and regions, e.g. 'the color of my car is red'.
is region for
è una regione di
Any relation between concepts, e.g. superordinated, conceptual parthood, having a parameter, having a task, superordination, etc.
is related to concept
è associato al concetto
Any relation between descriptions.
is related to description
è associata alla descrizione
A relation between a description and a role, e.g. the role 'Ingredient' is defined in the recipe for a cake.
is role defined in
è un ruolo definito in
A relation between an object and a role, e.g. 'student' is the role of 'John'.
is role of
è un ruolo di
A relation between a Situation and a Description, e.g. the execution of a Plan satisfies that plan.
is satisfied by
è soddisfatta da
A relation between situations and entities, e.g. 'this morning I've prepared my coffee with a new fantastic Arabica', i.e.: the preparation of my coffee this morning is the setting for (an amount of) a new fantastic Arabica.
include
is setting for
A partial order relation that holds between social objects. It represents the subsumption relation between e.g. a Concept and another Concept that is broader in extensional interpretation, but narrower in intensional interpretation.
E.g. PhDStudent Role specializes Student Role
is specialized by
è specializzato da
Direct succession applied to concepts. E.g. the role 'Officer' is subordinated to 'Director'.
is subordinated to
è subordinato a
Direct precedence applied to concepts. E.g. the role 'Executive' is superordinated to 'DepartmentManager'.
is superordinated to
è superordinato a
A relation between a description and a task, e.g. the task 'boil' is defined in a recipe for a cake.
is task defined in
è un task definito in
A relation between roles and tasks, e.g. 'students have the duty of giving exams' (i.e. the Role 'student' hasTask the Task 'giving exams').
is task of
è un obiettivo per
is time included in
è un tempo nel contesto di
The generic relation between time intervals and events.
intervallo temporale di
is time interval of
A relation to represent a (past, present or future) TimeInterval at which an Entity is observable.
In order to encode a specific time, a data value should be related to the TimeInterval.
An alternative way of representing time is the datatype property: hasIntervalDate
is time of observation of
è il tempo di osservazione di
A Collection has a unification criterion, provided by a Description; for example, a community of practice can be unified by a shared theory or interest, e.g. the community that makes research on mirror neurons shares some core knowledge about mirror neurons, which can be represented as a Description MirrorNeuronTheory that unifies the community. There can be several unifying descriptions.
is unified by
è unificato da
Generic distance relation between any Entity(s). E.g. Rome is near to Florence, astronomy is near to physics.
near to
A schematic relation between any entities, e.g. 'the chest region overlaps with the abdomen region', 'my spoken words overlap with hers', 'the time of my leave overlaps with the time of your arrival', 'fibromyalgia overlaps with other conditions'.
Subproperties and restrictions can be used to specialize overlaps for objects, events, time intervals, etc.
overlaps
sovrapposto a
The relation between a Parameter, e.g. 'MajorAgeLimit', and a Region, e.g. '18_year'.
For a more data-oriented relation, see hasDataValue
parametrizes
parametrizza
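The hasParameter/parametrizes pattern described in the comments above (a 4WheelDriver Role with a MinimumAge Parameter on the Amount 16) can be sketched as a simple lookup-and-compare. The names and the function are illustrative only, not part of DUL.

```python
# Hypothetical sketch of dul:hasParameter + dul:parametrizes, using the
# running example from the ontology comments: the 4WheelDriver Role has
# a MinimumAge Parameter, which parametrizes the region '>= 16 years'.

# Role -> (Parameter name, parametrized Region lower bound)
parameters = {"4WheelDriver": ("MinimumAge", 16)}

def classifiable_as(role, age):
    """Can an entity whose age region has this value be classified
    by the role, given the role's parameter constraint?"""
    _name, minimum = parameters[role]
    return age >= minimum

print(classifiable_as("4WheelDriver", 18))  # True
print(classifiable_as("4WheelDriver", 15))  # False
```

This mirrors the intended (non-OWL-expressible) semantics of isConstraintFor: the Parameter reaches the Entity only through the Concept that classifies it and the Region that the Parameter parametrizes.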
A relation between entities, expressing a 'sequence' schema.
E.g. 'year 1999 precedes 2000', 'deciding what coffee to use' precedes 'preparing coffee', 'World War II follows World War I', 'in the Milan to Rome autoroute, Bologna precedes Florence', etc.
It can then be used between tasks, processes, time intervals, spatially located objects, situations, etc.
Subproperties can be defined in order to distinguish the different uses.
precede
precedes
A relation between an information realization and an information object, e.g. the paper copy of the Italian Constitution realizes the text of the Constitution.
realizes
realizza
The relation between entities and information realizations, e.g. between Italy and a paper copy of the text of the Italian Constitution.
realizes information about
realizza informazione che si riferisce a
Aldo Gangemi
2021-04-05T22:31:22Z
This relation is a workaround to enable local reflexivity axioms (Self) to work with non-simple properties; in this case, dul:realizesInformationAbout.
A relation between two entities participating in a same Situation; e.g., 'Our company provides an antivenom service' (the situation is the service, the two entities are the company and the antivenom).
http://www.ontologydesignpatterns.org/ont/dul/DUL.owl
is in the same setting as
è nella stessa situazione di
A relation between a Situation and a Description, e.g. the execution of a Plan satisfies that plan.
satisfies
soddisfa
A partial order relation that holds between social objects.
It mainly represents the subsumption relation between e.g. a Concept or Description and another Concept (resp. Description) that is broader in extensional interpretation, but narrower in intensional interpretation. For example, the role PhDStudent specializes the role Student.
Another possible use is between a Collection that isCoveredBy a Concept A, and another Collection that isCoveredBy a Concept B that in its turn specializes A. For example, the 70,000 series Selmer Mark VI saxophone Collection specializes the Selmer Mark VI saxophone Collection.
specializes
specializza
A Collection has a unification criterion, provided by a Description; for example, a community of practice can be unified by a shared theory or interest, e.g. the community that makes research on mirror neurons shares some core knowledge about mirror neurons, which can be represented as a Description MirrorNeuronTheory that unifies the community. There can be several unifying descriptions.
unifica
unifies
A generic relation holding between a Description and a Concept. In order to be used, a Concept must be previously definedIn another Description. This last condition cannot be encoded for object properties in OWL.
usa il concetto
uses concept
has broader match
skos:broadMatch is used to state a hierarchical mapping link between two conceptual resources in different concept schemes.
Broader concepts are typically rendered as parents in a concept hierarchy (tree).
has broader
Relates a concept to a concept that is more general in meaning.
By convention, skos:broader is only used to assert an immediate (i.e. direct) hierarchical link between two conceptual resources.
has broader transitive
skos:broaderTransitive is a transitive superproperty of skos:broader.
By convention, skos:broaderTransitive is not used to make assertions. Rather, the properties can be used to draw inferences about the transitive closure of the hierarchical relation, which is useful e.g. when implementing a simple query expansion algorithm in a search application.
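A minimal Turtle sketch of this convention (hypothetical concepts in an ex: namespace): only direct links are asserted with skos:broader, and the transitive closure is left to inference.

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/animals#> .   # hypothetical scheme

# Only immediate hierarchical links are asserted with skos:broader.
ex:Cats    skos:broader ex:Mammals .
ex:Mammals skos:broader ex:Animals .

# Because skos:broaderTransitive is a transitive superproperty of
# skos:broader, a reasoner can infer:
#   ex:Cats skos:broaderTransitive ex:Mammals , ex:Animals .
```

A query expansion algorithm would query the inferred skos:broaderTransitive statements rather than assert them directly.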
has close match
skos:closeMatch is used to link two concepts that are sufficiently similar that they can be used interchangeably in some information retrieval applications. In order to avoid the possibility of "compound errors" when combining mappings across more than two concept schemes, skos:closeMatch is not declared to be a transitive property.
has exact match
skos:exactMatch is used to link two concepts, indicating a high degree of confidence that the concepts can be used interchangeably across a wide range of information retrieval applications. skos:exactMatch is a transitive property, and is a sub-property of skos:closeMatch.
skos:exactMatch is disjoint with each of the properties skos:broadMatch and skos:relatedMatch.
has top concept
Relates, by convention, a concept scheme to a concept which is topmost in the broader/narrower concept hierarchies for that scheme, providing an entry point to these hierarchies.
is in scheme
Relates a resource (for example a concept) to a concept scheme in which it is included.
A concept may be a member of more than one concept scheme.
These concept mapping relations mirror semantic relations, and the data model defined below is similar (with the exception of skos:exactMatch) to the data model defined for semantic relations. A distinct vocabulary is provided for concept mapping relations, to provide a convenient way to differentiate links within a concept scheme from links between concept schemes. However, this pattern of usage is not a formal requirement of the SKOS data model, and relies on informal definitions of best practice.
is in mapping relation with
Relates two concepts that come, by convention, from different schemes and that have comparable meanings.
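A minimal Turtle sketch of the in-scheme vs. cross-scheme distinction described above (hypothetical schemes under the a: and b: namespaces):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix a:    <http://example.org/schemeA#> .   # hypothetical scheme A
@prefix b:    <http://example.org/schemeB#> .   # hypothetical scheme B

a:ElectricCars skos:inScheme   a:TransportScheme ;
               skos:broader    a:Cars ;           # semantic link within a scheme
               skos:broadMatch b:RoadVehicles .   # mapping link between schemes

b:RoadVehicles skos:inScheme   b:VehicleScheme .
```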
has member
Relates a collection to one of its members.
For any resource, every item in the list given as the value of the skos:memberList property is also a value of the skos:member property.
has member list
Relates an ordered collection to the RDF list containing its members.
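A minimal Turtle sketch (hypothetical concepts in an ex: namespace) showing an ordered collection and the skos:member values its member list entails:

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/milk#> .   # hypothetical concepts

ex:MilkByAnimal a skos:OrderedCollection ;
    skos:memberList ( ex:BuffaloMilk ex:CowMilk ex:GoatMilk ) .

# By the rule above, each list item is also a skos:member:
#   ex:MilkByAnimal skos:member ex:BuffaloMilk , ex:CowMilk , ex:GoatMilk .
```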
has narrower match
skos:narrowMatch is used to state a hierarchical mapping link between two conceptual resources in different concept schemes.
Narrower concepts are typically rendered as children in a concept hierarchy (tree).
has narrower
Relates a concept to a concept that is more specific in meaning.
By convention, skos:narrower is only used to assert an immediate (i.e. direct) hierarchical link between two conceptual resources.
has narrower transitive
skos:narrowerTransitive is a transitive superproperty of skos:narrower.
By convention, skos:narrowerTransitive is not used to make assertions. Rather, the properties can be used to draw inferences about the transitive closure of the hierarchical relation, which is useful e.g. when implementing a simple query expansion algorithm in a search application.
has related
Relates a concept to a concept with which there is an associative semantic relationship.
skos:related is disjoint with skos:broaderTransitive.
has related match
skos:relatedMatch is used to state an associative mapping link between two conceptual resources in different concept schemes.
is in semantic relation with
Links a concept to a concept related by meaning.
This property should not be used directly, but as a super-property for all properties denoting a relationship of meaning between concepts.
is top concept in scheme
Relates a concept to the concept scheme that it is a top level concept of.
Relation between an Actuation and the property of a FeatureOfInterest it is acting upon.
acts on property
In the activity (Actuation) of automatically closing a window if the temperature in a room drops below 20 degrees Celsius, the property on which the Actuator acts upon is the state of the window as it changes from being open to being closed.
A relation between an Observation and the entity whose quality was observed, or between an Actuation and the entity whose property was modified, or between an act of Sampling and the entity that was sampled.
has feature of interest
For example, in an Observation of the weight of a person, the FeatureOfInterest is the person and the property is its weight.
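The weight example above can be sketched in Turtle (hypothetical individuals under an ex: namespace):

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ex:   <http://example.org/demo#> .   # hypothetical individuals

ex:obs1 a sosa:Observation ;
    sosa:hasFeatureOfInterest ex:alice ;        # the person
    sosa:observedProperty     ex:aliceWeight ;  # a property of the person
    sosa:madeBySensor         ex:scale1 ;
    sosa:hasResult            ex:result1 .

ex:aliceWeight a sosa:ObservableProperty .
```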
Relation linking an Observation or Actuation or act of Sampling and a Result or Sample.
has result
Relation between a FeatureOfInterest and the Sample used to represent it.
has sample
Relation between a Platform and a Sensor, Actuator, Sampler, or Platform, hosted or mounted on it.
hosts
Relation between an ActuatableProperty of a FeatureOfInterest and an Actuation changing its state.
is acted on by
In the activity (Actuation) of automatically closing a window if the temperature in a room drops below 20 degrees Celsius, the property on which the Actuator acts upon is the state of the window as it changes from being open to being closed.
A relation between a FeatureOfInterest and an Observation about it, an Actuation acting on it, or an act of Sampling that sampled it.
is feature of interest of
Relation between a Sensor, Actuator, Sampler, or Platform, and the Platform that it is mounted on or hosted by.
is hosted by
Relation between an ObservableProperty and the Sensor able to observe it.
is observed by
Relation linking a Result to the Observation or Actuation or act of Sampling that created or caused it.
is result of
Relation from a Sample to the FeatureOfInterest that it is intended to be representative of.
is sample of
Relation between an Actuator and the Actuation it has made.
made actuation
Relation linking an Actuation to the Actuator that made that Actuation.
made by actuator
Relation linking an act of Sampling to the Sampler (sampling device or entity) that made it.
made by sampler
Relation between an Observation and the Sensor which made the Observation.
made by sensor
Relation between a Sensor and an Observation made by the Sensor.
made observation
Relation between a Sampler (sampling device or entity) and the Sampling act it performed.
made sampling
Relation linking an Observation to the property that was observed. The ObservableProperty should be a property of the FeatureOfInterest (linked by hasFeatureOfInterest) of this Observation.
observed property
Relation between a Sensor and an ObservableProperty that it is capable of sensing.
observes
The time that the Result of an Observation, Actuation or Sampling applies to the FeatureOfInterest. Not necessarily the same as the resultTime. May be an Interval or an Instant, or some other compound TemporalEntity.
phenomenon time
A relation to link to a re-usable Procedure used in making an Observation, an Actuation, or a Sample, typically through a Sensor, Actuator or Sampler.
used procedure
has sample relationship
Links a sample to a sample relationship (which links to a related sample).
nature of (sample) relationship
Links a SampleRelationship to an indication of the nature of the relationship.
related sample
Links a sample relationship to the related sample.
Context
Actuation.png
Context.png
2018-04-15T05:35:00
[Unity]
affects
Status: *STABLE*
Connects a Context.Event with a set of related Context.Objects that captures a perspective of their interactions over time.
affects
Connecting a Context.Event with related Context.Objects
AnnotationTag
Model_(SOSA-SSN).png
2018-02-08T01:55:00
analyzes
Status: *STABLE*
Indicates that an Aspect is analyzed by a Model (throughout its States): the analysis looks for specific kinds of Markers. A Model is specific to the purpose of its BCI application, such as stress level measurement or fatigue detection.
analyzes
Connecting a Model with an Aspect.
Observations
Record_(SOSA-SSN).png
2017-08-31T03:19:00
[SSN]
aspectOfInterest
Status: *STABLE*
Connects a Record with its corresponding Aspect. This can be read as follows: "A Record is generated by capturing an Aspect (of interest)". This object property is a subproperty of sosa:hasFeatureOfInterest:
[sosa:Observation] ------ (sosa:hasFeatureOfInterest) ------ [sosa:FeatureOfInterest]
[Record] -------------------- (aspectOfInterest) ---------------------- [Aspect]
aspect of interest
Connecting a Record individual with its corresponding Aspect
Context
Context.png
2018-04-15T03:44:00
[Unity]
canPerform
Status: *STABLE*
States that a Context.AutonomousBeing "can perform" a set of Context.Capability-ies.
can perform
Connecting a Context.AutonomousBeing that can perform a set of Context.Capability-ies.
Context
Context.png
2018-04-15T05:35:00
[Unity]
causes
Status: *STABLE*
Connects a Context.Object with a set of related Context.Events that captures a perspective of its interactions over time.
causes
Connecting a Context.Object with related Context.Events
Actuation,Context
Actuation.png
2018-04-15T03:38:00
[SSN]
changes
Status: *STABLE*
Represents the relationship from an ActuationEvent to the thing or object (ActuationTarget) whose property (ImpactedProperty) is being manipulated by an Actuator.
%GENERAL_EXAMPLE%@Actuation-Use-Case
changes
Actuation
Actuation.png
2018-01-10T05:13:00
[Seydoux2016]
consumesInputFrom
Status: *STABLE*
A Command consumes its input from a Record. Following the Actuation-Actuator-Effect (AAE) design pattern proposed for the IoT Ontology (IoT-O), this relationship is based on the following definition:
[Actuator] ------ (consumes) ------ [Input]
via the execution of a Command.
%GENERAL_EXAMPLE%@Actuation-Use-Case
consumes input from
Connecting a Command with a Record.
Context
Context.png
2018-04-15T03:44:00
[Unity]
coparticipatesIn
Status: *STABLE*
Connects a Context.Object as a coparticipant in a set of Context.Methods.
coparticipates in
Connecting a Context.Object that coparticipates in a set of Context.Methods.
Context
Context.png
2018-04-15T03:44:00
[Unity]
definesBehaviorOf
Status: *STABLE*
Connects a Context.Method with a set of Context.Objects that models a perspective of their expected behavior.
defines behavior of
Connecting a Context.Method that models the behavior of some Context.Objects.
Sensors
StimulusEvent_(SOSA-SSN).png
2017-09-14T02:02:00
[SSN]
detects
Status: *STABLE*
Connects a Device with its corresponding StimulusEvent set. This can be read as follows: "A Device detects StimulusEvents". This object property is a subproperty of ssn:detects:
[sosa:Sensor] ------ (ssn:detects) ------ [ssn:Stimulus]
[Device] ---------- (detects) -------- [StimulusEvent]
[SSN] A relation from a sosa:Sensor to the ssn:Stimulus that the sosa:Sensor can detect.
detects
Connecting a Device individual with its corresponding StimulusEvent set.
Context
Context.png
2018-04-15T03:44:00
[Unity]
effectuates
Status: *STABLE*
Connects a Context.Event with some related Context.Methods as a part of an interaction.
From the perspective of the Object-Oriented Programming paradigm, this relationship captures a set of object messages in a specific time frame: an interaction between Context.Objects through their Context.Methods.
effectuates
Connecting a Context.Event with some Context.Methods.
Actuation
Actuation.png
2018-05-08T18:13:00
[Seydoux2016]
executes
Status: *STABLE*
An Actuator executes a Command. Following the Actuation-Actuator-Effect (AAE) design pattern proposed for the IoT Ontology (IoT-O), this relationship expands the following definition:
[Actuator] ------ (consumes) ------ [Input]
The Input comes from a Record via the execution of a Command.
See general remark about: 2_MAPPINGS-TO-SAN
%GENERAL_EXAMPLE%@Actuation-Use-Case
executes
Connecting an Actuator with a Command.
Descriptor,Observations
Aspect-and-Modality_(SOSA-SSN).png
2018-01-15T00:54:00
extends
Status: *STABLE*
Relation between a RecordChannelingSpec and a ChannelingSpec that the observation extends through the associated DeviceChannelingSpec. The object property composition (owl:propertyChainAxiom) ensures that if a DeviceChannelingSpec extends a particular ChannelingSpec, then one can infer that the RecordChannelingSpec also extends that ChannelingSpec. The extended spec of the channeling schema information object is derived by composition:
RecordChannelingSpec.extendsDeviceChannelingSpec * DeviceChannelingSpec.extendsChannelingSpec --> RecordChannelingSpec.extends := ChannelingSpec.
extends its related modality channeling schema spec
A hasDescriptor sub property that connects a RecordChannelingSpec with its related ChannelingSpec.
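The composition described above can be sketched as an OWL property chain axiom in Turtle (the bci: prefix is a placeholder; the ontology's actual namespace IRI is not shown in this listing):

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix bci: <http://example.org/bci#> .   # placeholder namespace for this ontology

# If a RecordChannelingSpec extendsDeviceChannelingSpec some
# DeviceChannelingSpec, and that DeviceChannelingSpec
# extendsChannelingSpec some ChannelingSpec, then the
# RecordChannelingSpec also extends that ChannelingSpec.
bci:extends owl:propertyChainAxiom
    ( bci:extendsDeviceChannelingSpec bci:extendsChannelingSpec ) .
```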
Descriptor,Sensors
Aspect-and-Modality_(SOSA-SSN).png
2016-07-29T03:18:00
extendsChannelingSpec
Status: *STABLE*
Connects a DeviceChannelingSpec with its related ChannelingSpec. This relation states that a DeviceChannelingSpec individual extends the ChannelingSpec from which it was derived.
extends its modality channeling schema spec
A hasDescriptor sub property that connects a DeviceChannelingSpec with its related ChannelingSpec.
Descriptor,Observations
Aspect-and-Modality_(SOSA-SSN).png
2016-07-29T02:21:00
extendsDeviceChannelingSpec
Status: *STABLE*
Connects a RecordChannelingSpec with its related DeviceChannelingSpec. This relation states that a RecordChannelingSpec individual extends the DeviceChannelingSpec from which it was derived.
extends its device channeling schema spec
A hasDescriptor sub property that connects a RecordChannelingSpec with its related DeviceChannelingSpec.
SystemCapabilities
Aspect-and-Modality_(SOSA-SSN).png
2017-09-14T04:34:00
[SSN]
forModality
Status: *STABLE*
Connects a Channel to the supported Modality it is described for. This can be read as follows: "A Channel is described for (supports) a Modality". This object property is a subproperty of ssn:forProperty:
[ssn-system:SystemCapability] ------ (ssn:forProperty) ------ [sosa:ObservableProperty]
[Channel] ---------------------- (forModality) ---------------------- [Modality]
[SSN] A relation from a ssn-system:SystemCapability to the sosa:ObservableProperty the capability is described for.
See general remark about: EEG-CONCEPTS
for modality
Connecting a Channel to the supported Modality it is described for.
Observations
RecordedData_(SOSA-SSN).png
2018-02-10T04:18:00
hasAccessMethod
Status: *STABLE*
Connects a RecordedData with a set of AccessMethods that describes how the data is being accessed by the BCI application.
has BCI data access method
Connecting a RecordedData with its associated AccessMethods
Session,Subject
Activity.png
2018-04-15T18:38:00
hasAction
Status: *STABLE*
Connects an Activity with its corresponding Action set.
has action
Connecting an Activity with its corresponding Action set.
Session
Session.png
2018-02-08T04:10:00
hasActivity
Status: *STABLE*
Connects a Session with its associated Activity.
has activity
Connecting a Session individual with its associated Activity.
Session,Actuation
Actuation.png
2018-02-08T02:07:00
hasActuation
Status: *STABLE*
Connects a Session with a set of Actuations that are associated with it.
%GENERAL_EXAMPLE%@Actuation-Use-Case
has actuation
Connecting a Session with its related set of Actuations
Descriptor,Actuation
Actuation.png
2017-12-11T02:30:00
hasActuatorSpec
Status: *STABLE*
Connects an (Actuator or ActuatorSpec) with its set of related ActuatorSpecs.
has actuator spec
A hasDescriptor sub property for connecting an (Actuator or ActuatorSpec) with its set of ActuatorSpecs.
Sensors
MeasurementCapability_(SOSA-SSN).png
2016-08-14T05:19:00
[XDF]
hasChannelData
Status: *STABLE*
Connects a DeviceChannelingSpec with the set of Channels that comprises its internal structure.
See general remark about: EEG-CONCEPTS
has channel data (logical component)
Connecting a DeviceChannelingSpec with its Channel set.
Observations
Aspect-and-Modality_(SOSA-SSN).png
2016-06-30T01:43:00
hasChannelingSpec
Status: *STABLE*
Connects a Modality with its related ChannelingSpec.
has channeling schema spec
A hasDescriptor sub property for connecting a Modality with its related ChannelingSpec.
Results,AnnotationTag
DataSegment.png
2018-02-08T02:39:44
hasDataBlock
Status: *STABLE*
Connects a (RecordedData or DataSegment) with its corresponding DataBlock set.
has data block set
Connecting a (RecordedData or DataSegment) individual with its corresponding DataBlock set.
Observations
RecordedData_(SOSA-SSN).png
2018-02-10T03:34:32
hasDataFormat
Status: *STABLE*
Connects a RecordedData with its corresponding DataFormat that describes the representation of the data observed by a Device.
has data format
Connecting a RecordedData with its corresponding DataFormat
AnnotationTag,Context,Descriptor,Session,SystemCapabilities,Observations,Subject
Descriptor_(SOSA-SSN).png
2018-02-08T02:43:00
hasDescriptor
Status: *STABLE*
Connects an entity with a set of Descriptors.
has external resource (descriptor)
Connecting an entity with a set of Descriptors.
Sensors
Device_(SOSA-SSN).png
2016-07-19T04:33:00
hasDeviceChannelingSpec
Status: *STABLE*
Connects a Device with its related DeviceChannelingSpec.
has device channeling schema spec
A sub property of hasDescriptor for connecting a Device with its related DeviceChannelingSpec.
Descriptor,Sensors
Device_(SOSA-SSN).png
2016-06-30T01:43:00
[XDF], [ESS]
hasDeviceSpec
Status: *STABLE*
Connects a (Device or DeviceSpec) with its set of related DeviceSpecs.
has device spec
A hasDescriptor sub property for connecting a (Device or DeviceSpec) with its set of DeviceSpecs.
EEG
MeasurementCapability_(SOSA-SSN).png
2016-08-14T05:39:00
[XDF]
hasEegChannelData
--Concerning EEG, this ontology only defines its related classes. It does not extend or define any specific properties for EEG.-- $ 04:27 AM 2016-07-29 $
true
Status: *STABLE*
Connects an EegDeviceChannelingSpec with the set of EegChannels that comprises its internal structure.
See general remark about: EEG-CONCEPTS
has EEG channel data
Connecting an EegDeviceChannelingSpec with its EegChannel set.
EEG
MeasurementCapability_(SOSA-SSN).png
2016-08-10T06:48:00
[SSN]
hasEegNonChannelData
--Concerning EEG, this ontology only defines its related classes. It does not extend or define any specific properties for EEG.-- $ 04:27 AM 2016-07-29 $
true
Status: *STABLE*
[SSN] Relation from an EegDevice to its EegNonChannel describing the non-channeling measurement capabilities (a set of measurement properties) of the EEG BCI device.
See general remark about: EEG-CONCEPTS
This ontology leaves open to BCI applications how to properly describe the basic non-channeling measurement capabilities of the relevant classes of sensors (the Device class hierarchy) used in BCI activities, following the description of the ssn-system:SystemCapability concept. Based on their system requirements, BCI applications may define a set of restrictions and specialized connections (subproperties) on the property hasNonChannelData (a subproperty of ssn-system:hasSystemCapability) for each particular subclass of Device (a subclass of sosa:Sensor) that describes a specific type of sensor.
has non-channeling EEG data (other EEG measurement capability)
AnnotationTag
Model_(SOSA-SSN).png
Record_(SOSA-SSN).png
2018-01-03T01:13:00
hasFeatureParameter
Status: *STABLE*
A ResponseTag or a Record has a set of FeatureParameters.
has feature
Connecting a ResponseTag or a Record with its corresponding set of FeatureParameters.
Context
2016-08-14T06:22:00
hasLocation
--This ontology will not define a "location" concept of a Context. BCI applications may extend its own ontology to include this definition if necessary.-- $ 06:26 AM 2016-08-14 $
true
Status: *STABLE*
Connects a Context with an entity that represents or describes its location.
has location
Connecting a Context with an entity that represents or describes its location.
Observations
Record_(SOSA-SSN).png
2016-08-30T23:26:00
hasMeasurementProperty
Status: *STABLE*
Connects a Record with a set of ssn-system:SystemProperty-ies (see ssn-system:SystemProperty). Through this relationship, BCI applications may extend the relevant metadata set related to the Record concept.
has SSN system property
Connecting a Record with a ssn-system:SystemProperty set.
Session
Session.png
2018-01-08T04:43:00
hasMember
Status: *STABLE*
Groups a set of Sessions and/or Interactions under a Collection.
has member (groups)
Connecting a Collection with its related set of Sessions and/or Interactions.
Observations
Aspect-and-Modality_(SOSA-SSN).png
2017-08-31T12:21:00
[SSN]
hasModality
Status: *STABLE*
Connects an Aspect with its corresponding Modality set. This can be read as follows: "An Aspect has Modality(ies)". This object property is a subproperty of ssn:hasProperty:
[sosa:FeatureOfInterest] ------ (ssn:hasProperty) ------ [sosa:ObservableProperty]
[Aspect] -------------------- (hasModality) ---------------------- [Modality]
has modality
Connecting an Aspect with its corresponding Modality set.
AnnotationTag
Model_(SOSA-SSN).png
2018-01-02T02:37:00
hasModel
Status: *STABLE*
A ResponseTag or FeatureParameter is associated with (has) a Model.
has model
Connecting a ResponseTag or FeatureParameter with its corresponding Model.
Context,Results,Observations
Context.Scene.png
DataBlock_(SOSA-SSN).png
2018-04-15T22:50:00
hasNext
Status: *STABLE*
Connects a (Context.Scene or Context.Event or Record or RecordedData or DataBlock) with its following (next) (Context.Scene or Context.Event or Record or RecordedData or DataBlock) in the sequence.
(*) [Context.Scene] On a Video Game: (Level 3-2) hasNext (Level 3-3).
(*) [Context.Event]: links to the following event in a sequence: (eating a meal) hasNext (taking medicine).
(*) [Record]: an observation is linked to its following observation; their difference could be in their channeling settings.
(*) [RecordedData]: links to the following data version of the current data set.
(*) [DataBlock]: points to the following data unit value from the current one along the sequence.
has next (following)
Connecting a (Context.Scene or Context.Event or Record or RecordedData or DataBlock) with its corresponding next (Context.Scene or Context.Event or Record or RecordedData or DataBlock).
Sensors
MeasurementCapability_(SOSA-SSN).png
2017-08-30T22:44:00
[SSN]
hasNonChannelData
Status: *STABLE*
[SSN] Relation from a Device to a NonChannel describing the non-channeling measurement capabilities (a set of measurement properties) of the BCI device.
See general remark about: 1_MAPPINGS-TO-SOSA-SSN
See general remark about: EEG-CONCEPTS
This ontology leaves open to BCI applications how to properly describe the basic non-channeling measurement capabilities of the relevant classes of sensors (the Device class hierarchy) used in BCI activities, as explained in the [SSN] System Capabilities Module and the [oldSSN] MeasuringCapability Module. Based on their system requirements, BCI applications may define a set of restrictions and specialized connections (subproperties) on the property hasNonChannelData (a subproperty of ssn-system:hasSystemCapability) for each particular subclass of Device (a subclass of sosa:Sensor) that describes a specific type of sensor.
has non-channeling data (other BCI measurement capability)
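As a sketch of the extension mechanism described above (hypothetical application namespace app:, placeholder bci: namespace), a BCI application might specialize hasNonChannelData for a particular Device subclass:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix bci:  <http://example.org/bci#> .    # placeholder namespace for this ontology
@prefix app:  <http://example.org/myapp#> .  # hypothetical BCI application

# A hypothetical specialization for a heart-rate sensor type:
app:hasHeartRateSensorCapability
    a owl:ObjectProperty ;
    rdfs:subPropertyOf bci:hasNonChannelData ;  # itself a subproperty of ssn-system:hasSystemCapability
    rdfs:domain app:HeartRateDevice .

app:HeartRateDevice rdfs:subClassOf bci:Device .  # Device is a subclass of sosa:Sensor
```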
Context
Context.png
2018-02-08T04:16:00
[Unity]
hasObject
Status: *STABLE*
Connects a (Context.Scene or Context.Object) with the set of Context.Objects that comprises its internal structure.
has object
Connecting a (Context.Scene or Context.Object) with its Context.Objects
Context
Context.ObjectComponent.png
2018-03-14T03:35:00
[Unity]
hasObjectComponent
--Following closely the alignment to DUL, the concepts about Objects and Events are distinctly separated. Therefore, from a structural perspective, a Context.ObjectComponent is a Context.Object except for Context.ObjectComponent.Event which is changed to Context.Event. This object property is not being used anymore.--$ 03:21 AM 2018-03-14 $
true
Status: *STABLE*
Connects a (Context.Object or Context.ObjectComponent) with the set of Context.ObjectComponents that comprises its internal structure.
has object component
Connecting a (Context.Object or Context.ObjectComponent) with its Context.ObjectComponents
Context,Session
Playout.png
2018-02-08T03:16:00
hasPlayout
Status: *STABLE*
Connects a (Context or Session) with its set of Playouts.
has playout record
Connecting a (Context or Session) with its Playouts.
Session,Context
Activity.png
PlayoutInstant.png
2018-02-11T03:59:00
hasPlayoutInstant
Status: *STABLE*
Connects a (Playout or Context.Event or Action) with its corresponding PlayoutInstant(s) log entries.
has playout instant
Connecting a (Playout or Context.Event or Action) individual with its corresponding PlayoutInstant(s)
Context,Results,Observations
Context.Scene.png
DataBlock_(SOSA-SSN).png
2016-07-07T02:50:00
hasPrevious
Status: *STABLE*
Connects a (Context.Scene or Record or RecordedData or DataBlock) with its previous (Context.Scene or Record or RecordedData or DataBlock) in the sequence.
(*) [Context.Scene] On a Video Game: (Level 3-2) hasPrevious (Level 3-1).
(*) [Record]: an observation is linked to its previous observation; their difference could be in their channeling settings.
(*) [RecordedData]: links to the previous data version of the current data set.
(*) [DataBlock]: points to the previous data unit value from the current one along the sequence.
has previous (before)
Connecting a (Context.Scene or Record or RecordedData or DataBlock) with its previous (Context.Scene or Record or RecordedData or DataBlock).
Session,Subject
Record_(SOSA-SSN).png
2016-06-24T00:07:00
hasRecord
Status: *STABLE*
Connects a (Subject or Session) with a set of Records that are associated with it.
has BCI record
Connecting a (Subject or Session) with its related set of Records
Observations
Record_(SOSA-SSN).png
2016-07-18T03:24:00
hasRecordChannelingSpec
Status: *STABLE*
Connects a Record with its related RecordChannelingSpec.
has record channeling schema spec
A hasDescriptor sub property for connecting a Record with its related RecordChannelingSpec.
Descriptor,Observations
Record_(SOSA-SSN).png
2016-07-18T03:01:00
[XDF], [ESS]
hasRecordSpec
Status: *STABLE*
Connects a (Record or RecordSpec) with its set of related RecordSpecs.
has record spec
A hasDescriptor sub property for connecting a (Record or RecordSpec) with its set of RecordSpecs.
Context
Context.Role.png
2018-02-08T02:28:00
[Unity]
hasRole
Status: *STABLE*
Connects a Context.Object with its Context.Role.
has role
Context
Context.Scene.png
2018-02-08T03:06:00
hasScene
Status: *STABLE*
Connects a (Context or Context.Scene) with its Context.Scenes.
has scene
Connecting a (Context or Context.Scene) with its Context.Scenes
Context,Session,Subject
Session.png
2018-02-08T03:08:00
hasSession
Status: *STABLE*
Connects a (Context or Interaction or Subject) with a set of Sessions that are associated with it.
has session
Connecting a (Context or Interaction or Subject) with its related set of Sessions
AnnotationTag
Marker_(SOSA-SSN).png
2016-05-22T18:51:00
hasStimulusEvent
Status: *STABLE*
A StimulusTag is associated with (has) a StimulusEvent.
has stimulus event
Connecting a StimulusTag with its corresponding StimulusEvent.
Session,Subject
Subject.png
2016-06-23T01:55:00
hasSubject
Status: *STABLE*
Connects an Interaction with its set of Subjects.
has subject (participant)
Connecting an Interaction with its set of Subjects
Session
Session.png
2016-06-30T01:43:00
hasSubjectState
Status: *STABLE*
Connects a Session with a set of SubjectStates which describe the overall state of the Subject during the Session.
has subject state
A hasDescriptor sub property for connecting a Session with a set of SubjectStates
Observations
RecordedData_(SOSA-SSN).png
2018-01-17T05:31:00
[oldSSN], [SSN]
hasValue
--Previously, both RecordedData and DataBlock were aligned to sosa:Result. In order to keep a simple model, DataBlock's alignment was removed. Therefore, the property hasDataBlock will be used to connect these two concepts.-- $ 05:05 AM 2018-01-17 $
true
Status: *STABLE*
Connects a RecordedData with its corresponding DataBlock set. This object property is a subproperty of sosa:hasResult (it was previously a subproperty of oldssn:hasValue):
[oldssn:SensorOutput] ------ (sosa:hasResult) ------ [oldssn:ObservationValue]
[RecordedData] -------------- (hasValue) -------------------- [DataBlock]
A SPARQL triple pattern to find the DataBlocks of a Record via this object property would be:
?Record bci:observationResult ?RecordedData .
?RecordedData bci:hasValue ?DataBlock .
Based on the following relationships:
[Record] -------- (observationResult) ------ [RecordedData]
[RecordedData] ---------- (hasValue) -------------- [DataBlock]
has value (data blocks)
Connecting a RecordedData individual with its corresponding DataBlocks
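The triple pattern above can be written as a complete SPARQL query (the bci: prefix IRI is a placeholder; the actual ontology namespace is not shown in this listing):

```sparql
PREFIX bci: <http://example.org/bci#>   # placeholder namespace

SELECT ?dataBlock
WHERE {
  ?record       bci:observationResult ?recordedData .
  ?recordedData bci:hasValue          ?dataBlock .
}
```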
Context
Context.Scene.png
2018-04-15T03:44:00
[Unity]
includesEvent
Status: *STABLE*
Connects a Context.Scene with a set of Context.Events.
includes event
Connecting a Context.Scene with a set of Context.Events.
Actuation,Results
Actuation.png
2018-04-22T15:43:00
[Seydoux2016]
involves
Status: *STABLE*
An ActuationResult involves an ActuationEvent that causes an effect on the ActuationTarget. Following the Actuation-Actuator-Effect (AAE) design pattern proposed for the IoT Ontology (IoT-O), this concept captures the following definition: [Actuation] ------ (involves) ------ [Effect]
%GENERAL_EXAMPLE%@Actuation-Use-Case
involves
Session,Actuation
Actuation.png
2018-02-08T02:07:00
isActuationOf
Status: *STABLE*
Connects an Actuation with its associated Session.
is actuation of
Connecting an Actuation with its related Session.
SystemCapabilities
MeasurementCapability_(SOSA-SSN).png
2016-08-14T05:28:00
[XDF]
isChannelDataOf
Status: *STABLE*
Connects a Channel with its associated DeviceChannelingSpec.
See general remark about: EEG-CONCEPTS
is channel (logical component) data of
Connecting a Channel with its associated DeviceChannelingSpec definition.
EEG
MeasurementCapability_(SOSA-SSN).png
2016-08-14T05:45:00
[XDF]
isEegChannelDataOf
--Concerning EEG, this ontology only defines its related classes. It does not extend or define any specific properties for EEG.-- $ 04:27 AM 2016-07-29 $
true
Status: *STABLE*
Connects an EegChannel with its associated EegDeviceChannelingSpec.
See general remark about: EEG-CONCEPTS
is EEG channel data of
Connecting an EegChannel with its associated EegDeviceChannelingSpec definition.
Context
2018-04-15T03:44:00
[Unity]
isEventIncludedIn
Status: *STABLE*
Connects a Context.Event with a set of Context.Scenes.
is event included in
Connecting a Context.Event with a set of Context.Scenes.
Actuation
Actuation.png
2018-05-08T18:23:00
isExecutedBy
Status: *STABLE*
A Command is executed by an Actuator.
See general remark about: 2_MAPPINGS-TO-SAN
%GENERAL_EXAMPLE%@Actuation-Use-Case
is executed by
Connecting a Command with an Actuator.
Actuation
Actuation.png
2018-01-10T05:23:00
isInputFor
Status: *STABLE*
A Record is input for a Command.
%GENERAL_EXAMPLE%@Actuation-Use-Case
is input for
Connecting a Record with a Command.
Session,Subject
2018-02-05T04:46:00
isMemberOf
Status: *STABLE*
Connects a set of Sessions and/or Interactions with a Collection.
is member of
Connecting a set of Sessions and/or Interactions with a Collection.
Observations
Aspect-and-Modality_(SOSA-SSN).png
2017-08-31T02:28:00
[SSN]
isModalityOf
Status: *STABLE*
Connects a Modality with its corresponding Aspect. This can be read as follows: "A Modality is a modality of an Aspect". This object property is a subproperty of ssn:isPropertyOf: [sosa:ObservableProperty] ------ (ssn:isPropertyOf) ------ [sosa:FeatureOfInterest] [Modality] -------------------- (isModalityOf) -------------------- [Aspect]
is modality of
Connecting a Modality with its corresponding Aspect.
AnnotationTag
Model_(SOSA-SSN).png
2018-01-02T02:45:00
isModelOf
Status: *STABLE*
A Model has an associated set of ResponseTags or FeatureParameters.
is model of
Connecting a Model with its corresponding set of ResponseTags or FeatureParameters.
Observations
Record_(SOSA-SSN).png
RecordedData_(SOSA-SSN).png
2018-01-25T02:30:00
isObservationResultOf
Status: *STABLE*
Connects a RecordedData set with its corresponding Record.
is observation result of (a BCI Record)
Connecting a RecordedData set with its corresponding Record.
Context
2018-02-08T02:41:00
isPlayoutInstantOf
Status: *STABLE*
Connects a PlayoutInstant log entry with the corresponding individual (Playout or Context.Event or Action) that issued its creation.
is the playout instant of
Connecting a PlayoutInstant with its corresponding (Playout or Context.Event or Action) individual.
Context
Playout.png
2018-02-08T03:17:00
isPlayoutOf
Status: *STABLE*
Connects a set of Playouts with their corresponding (Context or Session).
is playout record of
Connecting an individual with exactly one (Context or Session).
Observations
RecordedData_(SOSA-SSN).png
2018-01-25T02:51:00
[oldSSN]
isProducedByDevice
Status: *STABLE*
Connects a RecordedData with the corresponding Device that produced it. This can be read as follows: "A RecordedData is produced by a Device". This supports the following functionality: RecordedData.sosa:isResultOf * Record.sosa:madeBySensor --> RecordedData.isProducedByDevice := Device.
See general remark about: 1_MAPPINGS-TO-SOSA-SSN
This object property was previously defined as a subproperty of oldssn:isProducedBy: [oldssn:SensorOutput] ------ (oldssn:isProducedBy) ------ [oldssn:Sensor] [RecordedData] -------- (isProducedByDevice) ---------- [Device]
is produced by device
Connecting a RecordedData with the corresponding Device that produced it.
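The property-chain reading above (RecordedData.sosa:isResultOf * Record.sosa:madeBySensor --> isProducedByDevice) can be sketched as a plain join; the individual names are hypothetical, and in practice an OWL reasoner would derive these triples from the chain axiom.

```python
# Sketch of the chain: RecordedData --sosa:isResultOf--> Record
# --sosa:madeBySensor--> Device, which entails
# RecordedData --bci:isProducedByDevice--> Device.
# Individual names are hypothetical; a reasoner would infer this.
triples = {
    ("rd1", "sosa:isResultOf", "record1"),
    ("record1", "sosa:madeBySensor", "eegDevice1"),
}

def infer_produced_by(graph):
    """Materialize bci:isProducedByDevice from the two-step chain."""
    results = {(s, o) for s, p, o in graph if p == "sosa:isResultOf"}
    sensors = {(s, o) for s, p, o in graph if p == "sosa:madeBySensor"}
    return {(rd, "bci:isProducedByDevice", dev)
            for rd, rec in results
            for rec2, dev in sensors if rec == rec2}

print(infer_produced_by(triples))
```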
Observations
StimulusEvent_(SOSA-SSN).png
2017-09-14T03:35:00
[SSN]
isProxyFor
Status: *STABLE*
Connects a StimulusEvent with its associated Modality(ies). This can be read as follows: "A StimulusEvent is a proxy for a Modality". This object property is a subproperty of ssn:isProxyFor: [ssn:Stimulus] ------ (ssn:isProxyFor) ------ [sosa:ObservableProperty] [StimulusEvent] ------ (isProxyFor) -------------------- [Modality]
The following descriptions capture the definition of this relation (4.2.13 Stimuli-Centered, 5.3.1.2.1 Stimuli), adjusted to this ontology: (*) The role of StimulusEvents as proxies between the Device and the object of sensing (Context.Object). (*) A StimulusEvent may only be usable as a proxy for a specific region of an observed Modality.
is proxy for
Connecting a StimulusEvent with its associated Modality(ies).
Observations
Record_(SOSA-SSN).png
2016-06-24T00:16:00
isRecordOf
Status: *STABLE*
Connects a Record with its associated (Subject or Session).
is BCI record of
Connecting a Record with its related (Subject or Session).
AnnotationTag
DataSegment.png
2018-04-18T03:16:00
isReferencedBy
Status: *STABLE*
Connects a DataSegment with a set of Markers.
is referenced by
Connecting a DataSegment individual with a set of Markers.
Session
Session.png
2018-02-08T03:33:00
isSessionOf
Status: *STABLE*
Connects a Session with its associated (Context or Interaction or Subject).
is session of
Connecting a Session with its related (Context or Interaction or Subject).
Context,Observations
Marker_(SOSA-SSN).png
2016-06-23T01:55:00
isStimulusEventOf
Status: *STABLE*
A StimulusEvent generates a set of StimulusTags.
is stimulus event of (generates)
Connecting a StimulusEvent with its set of StimulusTags.
Subject
Subject.png
2016-06-23T01:55:00
isSubjectOf
Status: *STABLE*
Connects a Subject with the set of Interactions in which he/she participates.
is subject of (participates in)
Connecting a Subject with the set of Interactions in which he/she participates.
Results
2018-01-17T05:19:00
isValueOf
--Previously, both RecordedData and DataBlock were aligned to sosa:Result. In order to keep a simple model, DataBlock's alignment was removed. Therefore, the property hasDataBlock will be used to connect these two concepts.-- $ 05:05 AM 2018-01-17 $
true
Status: *STABLE*
Connects a DataBlock set with its corresponding RecordedData.
is value of (recorded data)
Connecting a DataBlock individual with its corresponding RecordedData.
Context,Subject
Context.png
Subject.png
2018-04-16T00:35:00
[Unity]
issues
Status: *STABLE*
Connects a Subject with a set of Actions that capture the interactions she performs in the Context.
A Subject interacts with a Context throughout a set of Actions that she issues while performing an Activity during a Session.
issues
Connecting a Subject with related Actions.
EEG
EegRecord_(SOSA-SSN).png
2016-07-22T02:47:00
madeEegRecord
--Concerning EEG, this ontology only defines its related classes. It does not extend or define any specific properties for EEG.-- $ 04:27 AM 2016-07-29 $
true
Status: *STABLE*
Connects an EegDevice with its corresponding EegRecords. This can be read as follows: "An EegDevice made EegRecords". This object property is a subproperty of madeRecord: [Device] ------------ (madeRecord) ------------ [Record] [EegDevice] ------ (madeEegRecord) ------ [EegRecord]
See general remark about: EEG-CONCEPTS
made EEG record
Connecting an EegDevice individual with its corresponding EegRecord.
Sensors
Record_(SOSA-SSN).png
2017-08-31T03:47:00
[SSN]
madeRecord
Status: *STABLE*
Connects a Device with its corresponding Records. This can be read as follows: "A Device made Records". This object property is a subproperty of sosa:madeObservation: [sosa:Sensor] ------ (sosa:madeObservation) ------ [sosa:Observation] [Device] -------------------- (madeRecord) ------------------ [Record]
See general remark about: EEG-CONCEPTS
made BCI record
Connecting a Device individual with its corresponding Record.
Observations
Record_(SOSA-SSN).png
RecordedData_(SOSA-SSN).png
2017-09-14T03:30:00
[oldSSN], [SSN]
observationResult
Status: *STABLE*
Connects a Record with its corresponding RecordedData set. This can be read as follows: "A Record has as its observation result a RecordedData set". This object property is a subproperty of sosa:hasResult (previously of oldssn:observationResult): [sosa:Observation] ------ (sosa:hasResult) ------ [oldssn:SensorOutput] [Record] ---------- (observationResult) ------ [RecordedData]
observation result (of a BCI record)
Connecting a Record individual with its corresponding RecordedData set.
Observations
Record_(SOSA-SSN).png
2017-08-31T03:47:00
[SSN]
observedByDevice
Status: *STABLE*
Connects a Record with its corresponding Device. This can be read as follows: "A Record is observed by a Device". This object property is a subproperty of sosa:madeBySensor: [sosa:Observation] ------ (sosa:madeBySensor) ------ [sosa:Sensor] [Record] ------------ (observedByDevice) ---------- [Device]
See general remark about: EEG-CONCEPTS
observed by device
Connecting a Record individual with its corresponding Device.
EEG
EegRecord_(SOSA-SSN).png
2016-07-22T02:47:00
observedByEegDevice
--Concerning EEG, this ontology only defines its related classes. It does not extend or define any specific properties for EEG.-- $ 04:27 AM 2016-07-29 $
true
Status: *STABLE*
Connects an EegRecord with its corresponding EegDevice. This can be read as follows: "An EegRecord is observed by an EegDevice". This object property is a subproperty of observedByDevice: [Record] ------------ (observedByDevice) ------------ [Device] [EegRecord] ------ (observedByEegDevice) ------ [EegDevice]
See general remark about: EEG-CONCEPTS
observed by EEG device
Connecting an EegRecord individual with its corresponding EegDevice.
Observations
Record_(SOSA-SSN).png
2016-09-06T05:10:00
[SSN], [ESS]
observedModality
Status: *STABLE*
Connects a Record with its corresponding Modality. This can be read as follows: "A Record observes a Modality". This object property is a subproperty of sosa:observedProperty: [sosa:Observation] ------ (sosa:observedProperty) ------ [sosa:ObservableProperty] [Record] -------------- (observedModality) -------------------- [Modality]
[ESS 1.0]: A record (bci:Record) has a specific (single) defined modality (bci:Modality). [ESS 2.0]: A record (bci:Session) has a specific defined RecordedParameterSet, which groups various (multiple) RecordedModality (Modality-ies). Therefore, a record (bci:Session) can be associated with multiple RecordedModality (bci:Modality) definitions.
observed modality
Connecting a Record individual with its corresponding Modality.
Sensors
Device_(SOSA-SSN).png
MeasurementCapability_(SOSA-SSN).png
Record_(SOSA-SSN).png
2017-09-14T04:02:00
[SSN], [Compton2009]
observes
Status: *STABLE*
[SSN] Relation between a Device (sosa:Sensor) and a Modality (sosa:ObservableProperty) that the sensor supports (can observe). The object property composition (owl:propertyChainAxiom) ensures that if a Record (sosa:Observation) is made for a particular Modality (sosa:ObservableProperty), then one can infer that the Device (sosa:Sensor) supports (observes) that Modality (quality). This supports the following functionality: (*) Device.madeRecord * Record.observedModality --> Device.observes := Modality. (*) Device.hasDeviceChannelingSpec * DeviceChannelingSpec.hasChannelData * Channel.forModality --> Device.observes := Modality.
See general remark about: 1_MAPPINGS-TO-SOSA-SSN
See general remark about: EEG-CONCEPTS
observes
Connecting a Device individual with its corresponding supported Modality individual.
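The two property chains listed above can be sketched with a generic chain-follower over triple tuples; all individual names are hypothetical, and in a real knowledge base an OWL reasoner performs this inference from the owl:propertyChainAxiom.

```python
# Generic sketch of the two owl:propertyChainAxiom chains behind bci:observes:
#   madeRecord o observedModality                           -> observes
#   hasDeviceChannelingSpec o hasChannelData o forModality  -> observes
# All individual names are hypothetical illustrations.
def follow_chain(graph, start, chain):
    """Follow a list of properties from a start node; return reachable objects."""
    frontier = {start}
    for prop in chain:
        frontier = {o for s, p, o in graph if s in frontier and p == prop}
    return frontier

triples = {
    ("dev1", "bci:madeRecord", "rec1"),
    ("rec1", "bci:observedModality", "eeg"),
    ("dev1", "bci:hasDeviceChannelingSpec", "spec1"),
    ("spec1", "bci:hasChannelData", "ch1"),
    ("ch1", "bci:forModality", "eeg"),
}

chains = [
    ["bci:madeRecord", "bci:observedModality"],
    ["bci:hasDeviceChannelingSpec", "bci:hasChannelData", "bci:forModality"],
]
observed = set().union(*(follow_chain(triples, "dev1", c) for c in chains))
print(observed)  # {'eeg'}
```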
SystemCapabilities
Aspect-and-Modality_(SOSA-SSN).png
2017-08-09T02:39:00
[SSN]
ofAspect
--The property oldssn:ofFeature was deprecated in SOSA/SSN. Therefore, it will be also deprecated in BCI-O.-- $ 02:39 AM 2017-08-09 $
true
Status: *STABLE*
Connects a Channel to the Aspect it is described for. This can be read as follows: "A Channel is used to describe a property of an Aspect". This object property is a subproperty of ssn:ofFeature: [ssn-system:SystemCapability] ------ (ssn:ofFeature) ------ [sosa:FeatureOfInterest] [Channel] ---------------------- (ofAspect) -------------------- [Aspect] [SSN] A relation from a ssn-system:SystemCapability to the sosa:FeatureOfInterest the capability is described for. (Used in conjunction with ssn:forProperty.)
See general remark about: EEG-CONCEPTS
of aspect
Connecting a Channel to the Aspect it is described for.
AnnotationTag
DataSegment.png
2018-04-18T03:07:00
pointsTo
Status: *STABLE*
Connects a Marker with a DataSegment.
points to
Connecting a Marker individual with a DataSegment.
Context
Context.png
2018-04-15T23:44:00
[Unity]
raises
Status: *STABLE*
Connects a Context.Method with some related Context.Events, as part of a Context.Object interaction that marks their beginning.
From the perspective of the Object-Oriented Programming paradigm, this relationship captures an object message that defines the beginning of a set of events (Context.Events).
raises
Connecting a Context.Method with the Context.Events it originates.
Actuation
Actuation.png
2018-05-08T18:43:00
[Seydoux2016]
triggers
Status: *STABLE*
An Actuator triggers an ActuationEvent that causes an effect on the ActuationTarget. Following the Actuation-Actuator-Effect (AAE) design pattern proposed for the IoT Ontology (IoT-O), this concept captures the following definition: [san:Actuator] ------ (triggers) ------ [san:Effect]
See general remark about: 2_MAPPINGS-TO-SAN
%GENERAL_EXAMPLE%@Actuation-Use-Case
triggers
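Combining bci:triggers with bci:involves from the definitions above, the AAE pattern can be sketched as a small join over triple tuples; the individuals are hypothetical illustrations.

```python
# Sketch of the Actuation-Actuator-Effect (AAE) pattern using bci:triggers
# and bci:involves. Individual names are hypothetical, not part of BCI-O.
triples = {
    ("actuator1", "bci:triggers", "event1"),   # Actuator triggers ActuationEvent
    ("actuation1", "bci:involves", "event1"),  # Actuation involves that event
}

def actuators_for(actuation, graph):
    """Actuators whose triggered events are involved in the given Actuation."""
    events = {o for s, p, o in graph if s == actuation and p == "bci:involves"}
    return {s for s, p, o in graph if p == "bci:triggers" and o in events}

print(actuators_for("actuation1", triples))  # {'actuator1'}
```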
A datatype property that encodes values from a datatype for an Entity.
There are several ways to encode values in DOLCE (Ultralite):
1) Directly assert an xsd:_ value to an Entity by using hasDataValue
2) Assert a Region for an Entity by using hasRegion, and then assert an xsd:_ value to that Region, by using hasRegionDataValue
3) Assert a Quality for an Entity by using hasQuality, then assert a Region for that Quality, and assert an xsd:_ value to that Region, by using hasRegionDataValue
4) When the value is required, but not directly observed, assert a Parameter for an xsd:_ value by using hasParameterDataValue, and then associate the Parameter to an Entity by using isConstraintFor
5) When the value is required, but not directly observed, you can also assert a Parameter for a Region by using parametrizes, and then assert an xsd:_ value to that Region, by using hasRegionDataValue
The five approaches obey different requirements.
For example, a simple value can be easily asserted by using pattern (1), but if one needs to assert an interval between two values, a Region should be introduced to materialize that interval, as pattern (2) suggests.
Furthermore, if one needs to distinguish the individual Quality of a value, e.g. the particular nature of the density of a substance, pattern (3) can be used.
Patterns (4) and (5) should be used instead when a constraint or a selection is modeled, independently from the actual observation of values in the real world.
ha valore
has data value
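The value-encoding patterns (1)-(3) above can be contrasted with a small sketch over triple tuples; the entity, Quality, and Region names are hypothetical, and the resolver below only illustrates how the three shapes bottom out in the same value.

```python
# Sketch of DOLCE value-encoding patterns (1)-(3) as triple tuples.
# Entity/Quality/Region names are hypothetical illustrations.
pattern1 = [("ent1", "dul:hasDataValue", 42)]
pattern2 = [("ent1", "dul:hasRegion", "region1"),
            ("region1", "dul:hasRegionDataValue", 42)]
pattern3 = [("ent1", "dul:hasQuality", "quality1"),
            ("quality1", "dul:hasRegion", "region1"),
            ("region1", "dul:hasRegionDataValue", 42)]

def value_of(entity, graph):
    """Resolve an entity's data value under patterns (1)-(3)."""
    for s, p, o in graph:
        if s == entity and p == "dul:hasDataValue":
            return o                  # pattern (1): direct assertion
    nodes = {entity}
    for prop in ("dul:hasQuality", "dul:hasRegion"):
        nodes |= {o for s, p, o in graph if s in nodes and p == prop}
    for s, p, o in graph:
        if s in nodes and p == "dul:hasRegionDataValue":
            return o                  # patterns (2) and (3): via a Region

print([value_of("ent1", g) for g in (pattern1, pattern2, pattern3)])  # [42, 42, 42]
```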
A datatype property that encodes values from xsd:dateTime for an Event; a same Event can have more than one xsd:dateTime value: begin date, end date, date at which the interval holds, etc.
evento ha data
has event date
A datatype property that encodes values from xsd:dateTime for a TimeInterval; a same TimeInterval can have more than one xsd:dateTime value: begin date, end date, date at which the interval holds, etc.
has interval date
intervallo ha data
Parametrizes values from a datatype. For example, a Parameter MinimumAgeForDriving hasParameterDataValue 18 on datatype xsd:int, in the Italian traffic code. In this example, MinimumAgeForDriving isDefinedIn the Norm ItalianTrafficCodeAgeDriving.
More complex parametrization requires workarounds. E.g. AgeRangeForDrugUsage could parametrize the data values 14 to 50 on the datatype xsd:int. Since complex datatypes are not allowed in OWL 1.0, a solution can only work by creating two 'sub-parameters': MinimumAgeForDrugUsage (that hasParameterDataValue 14) and MaximumAgeForDrugUsage (that hasParameterDataValue 50), which are components of (cf. hasComponent) the main Parameter AgeRangeForDrugUsage.
Ordering on subparameters can be created by using or specializing the object property 'precedes'.
ha valore
has parameter data value
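The sub-parameter workaround described above can be sketched as follows; the parameter names follow the DOLCE example, but the dict layout is a hypothetical stand-in for the hasComponent / hasParameterDataValue assertions.

```python
# Sketch of the sub-parameter workaround for range parametrization.
# Parameter names follow the DOLCE example; the dict layout is hypothetical.
age_range = {
    "parameter": "AgeRangeForDrugUsage",
    "components": {
        "MinimumAgeForDrugUsage": 14,  # hasParameterDataValue 14
        "MaximumAgeForDrugUsage": 50,  # hasParameterDataValue 50
    },
}

def satisfies(value, param):
    """Check a value against the min/max sub-parameters of a range Parameter."""
    lo = param["components"]["MinimumAgeForDrugUsage"]
    hi = param["components"]["MaximumAgeForDrugUsage"]
    return lo <= value <= hi

print(satisfies(30, age_range), satisfies(12, age_range))  # True False
```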
A datatype property that encodes values for a Region, e.g. a float for the Region Height.
has region data value
regione ha valore
notation
A notation, also known as classification code, is a string of characters such as "T58.5" or "303.4833" used to uniquely identify a concept within the scope of a given concept scheme.
By convention, skos:notation is used with a typed literal in the object position of the triple.
The simple value of an Observation or Actuation or act of Sampling.
has simple result
The simple value of an Observation or Actuation or act of Sampling.
For instance, the values 23 or true.
The result time is the instant of time when the Observation, Actuation or Sampling activity was completed.
result time
The result time is the instant of time when the Observation, Actuation or Sampling activity was completed.
AnnotationTag
Marker_(SOSA-SSN).png
[Associated data type spec]. Its Domain value consists of approximated arbitrary real numbers in the closed range [0..1].
2016-05-22T21:23:00
hasConfidence
Status: *STABLE*
Captures the accuracy (statistical level of confidence) of the ResponseTag.
Example: 0.75
has confidence
Descriptor
Descriptor_(SOSA-SSN).png
[Associated data type spec].
2016-06-29T00:35:00
[OWL-Time]
hasDateTime
Status: *STABLE*
XSD dateTime associated with an entity. BCI applications should, at least, measure these time values in minutes.
has date time
Observations
Descriptor_(SOSA-SSN).png
Record_(SOSA-SSN).png
[Associated data type spec].
2016-07-19T01:59:00
[ESS], [XDF]
hasEndChannel
Status: *STABLE*
The channel number in the recording where the modality block ends.
has end channel
Context,Session,Observations,Actuation
TimeInterval.png
[Associated data type spec].
2018-02-10T03:44:00
[OWL-Time]
hasEndTime
Status: *STABLE*
XSD dateTime associated with an entity, indicating the ending point of a time interval. BCI applications should, at least, measure these time values in seconds.
For simplicity, this ontology does not explicitly define a TimeInterval concept.
has end (final) date time
Observations
Aspect-and-Modality_(SOSA-SSN).png
[Associated data type spec]. Its Domain value is all the positive integers: { 1, 2, 3, ... }
2016-05-24T00:21:00
hasIntensityLevel
Status: *STABLE*
Indicates the level of intensity related to its concept.
Aspect: the measurement of the intensity depends on the nature of the Aspect and purpose of the BCI application.
has intensity level
AnnotationTag,SystemCapabilities
[Associated data type spec].
2018-05-20T23:19:00
[ESS], [XDF]
hasLabel
Status: *STABLE*
A human-readable and descriptive label for general identification purposes, associated with an instance.
If necessary, BCI applications may extend the definition of this attribute to specify the preferred notation scheme, using the semantic annotation skos:notation.
has label
(*) In a Channel: this attribute is used for search (access) purposes, according to a preferred labeling scheme (see the editorial note). (*) In a Marker: this attribute indicates a marker type. For example: "110" = "Red light being flashed".
Descriptor,Observations
Descriptor_(SOSA-SSN).png
RecordedData_(SOSA-SSN).png
[Associated data type spec].
2018-05-19T19:42:00
hasLocator
Status: *STABLE*
An IRI locator to access a Web resource.
An http-schemed IRI supports content-type negotiation, and can thus convey the media type of the Web resource.
has locator (IRI)
(*) AccessMethod: access to the Web resource that represents the Data File that stores the Record. (*) Descriptor: external Web resource IRI.
Observations
RecordedData_(SOSA-SSN).png
A broker (or MQTT Server) conforms to the following definition in the latest OASIS MQTT specification: MQTT Version 3.1.1 Plus Errata 01 (OASIS Standard Incorporating Approved Errata 01, 10 December 2015) -- 1.2 Terminology -- (Server).
[Associated data type spec].
As defined in: MQTT Version 3.1.1 Plus Errata 01 (OASIS Standard Incorporating Approved Errata 01, 10 December 2015) -- 1. Introduction -- (1.5 Data representations).
2016-07-05T04:35:00
hasMQTT.Broker
--This ontology should be agnostic to the data access-- $ 04:35 AM 2016-07-05 $
true
Status: *STABLE*
Describes the broker (an MQTT Server) parameter in an AccessMethod.MQTT connection.
has MQTT broker
Observations
RecordedData_(SOSA-SSN).png
A Client Identifier (or ClientId) conforms to the following definition in the latest OASIS MQTT specification: MQTT Version 3.1.1 Plus Errata 01 (OASIS Standard Incorporating Approved Errata 01, 10 December 2015) -- 3. MQTT Control Packets / 3.1. CONNECT / 3.1.3. Payload -- (3.1.3.1. Client Identifier).
[Associated data type spec].
The Client Identifier MUST be a UTF-8 encoded string as defined in Section 1.5.3 of the latest OASIS MQTT specification: MQTT Version 3.1.1 Plus Errata 01 (OASIS Standard Incorporating Approved Errata 01, 10 December 2015) -- 1. Introduction -- (1.5 Data representations).
2016-07-05T04:35:00
hasMQTT.ID
--This ontology should be agnostic to the data access-- $ 04:35 AM 2016-07-05 $
true
Status: *STABLE*
Describes the Client Identifier (ClientId) parameter in an AccessMethod.MQTT connection. This parameter identifies the Client to the MQTT Server.
has MQTT ID
Observations
RecordedData_(SOSA-SSN).png
A Topic conforms to the following definitions in the latest OASIS MQTT specification: MQTT Version 3.1.1 Plus Errata 01 (OASIS Standard Incorporating Approved Errata 01, 10 December 2015) -- (*) 3. MQTT Control Packets / 3.3 PUBLISH -- Publish message -- (3.3.2.1. Topic Name). (*) 3. MQTT Control Packets / 3.8 SUBSCRIBE -- Subscribe to topics -- (3.8.3 Payload).
[Associated data type spec].
The Topic MUST be a UTF-8 encoded string as defined in Section 1.5.3 of the latest OASIS MQTT specification: MQTT Version 3.1.1 Plus Errata 01 (OASIS Standard Incorporating Approved Errata 01 10 December 2015) -- 1. Introduction -- (1.5 Data representations).
2016-07-05T04:35:00
hasMQTT.Topic
--This ontology should be agnostic to the data access-- $ 04:35 AM 2016-07-05 $
true
Status: *STABLE*
Describes the Topic (name or filter) parameter in an AccessMethod.MQTT connection. This parameter identifies either: (*) The Topic Name (for the PUBLISH message), which identifies the information channel to which payload data is published. (*) The Topic Filter (for the SUBSCRIBE message), which corresponds to the Client's interest in one or more Topics (each subscription registers a Client's interest with a Server). The payload of a SUBSCRIBE packet contains at least one Topic Filter.
has MQTT topic
The usage of this parameter (e.g., to subscribe to specific topics) depends on the purpose and implementation of the BCI application.
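A hedged sketch of how the three MQTT parameters above might be collected for an AccessMethod.MQTT individual; the broker host, client ID, and topic values are hypothetical examples, not defaults defined by this ontology.

```python
# Hypothetical AccessMethod.MQTT parameter set, gathering hasMQTT.Broker,
# hasMQTT.ID, and hasMQTT.Topic as a plain mapping. All values are examples.
access_method = {
    "hasType": "Stream",                     # real-time access method
    "hasMQTT.Broker": "broker.example.org",  # the MQTT Server
    "hasMQTT.ID": "bci-client-01",           # Client Identifier (ClientId)
    "hasMQTT.Topic": "lab/session1/eeg",     # Topic Name or Topic Filter
}

# MQTT v3.1.1 requires the ClientId (and Topic) to be UTF-8 encoded strings.
for key in ("hasMQTT.ID", "hasMQTT.Topic"):
    access_method[key].encode("utf-8")  # fails if the value is not encodable

print(access_method["hasMQTT.Broker"])  # broker.example.org
```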
AnnotationTag
Marker_(SOSA-SSN).png
[Associated data type spec].
2016-05-23T02:03:00
hasModelIRI
Status: *STABLE*
The IRI of the resource that describes or represents the Model or classifier. A Model can be described in any language or format, such as PMML.
has model IRI
Sensors,Observations
Descriptor_(SOSA-SSN).png
Record_(SOSA-SSN).png
[Associated data type spec].
=== EEG 10/20 system channeling schema. ===
2016-10-12T01:57:00
[ESS], [XDF]
hasNumberOfChannels
Status: *STABLE*
Captures the number of channels used in a Record or supported by a Device. Its value is expected to be a positive integer.
[XDF]: channel_count is a non-negative integer that encodes the number of channels in the stream. [ESS 1.0]: number of (used) data channels.
Instead of using this generic datatype property, some BCI applications could define the following specific attributes, according to the recording setup, DeviceChannelingSpec and Record's Modality: (*) Number of used LEDs. (*) Number of used cameras.
has number of channels
Results
DataBlock_(SOSA-SSN).png
[Associated data type spec].
2016-06-12T04:17:00
hasOffset
Status: *STABLE*
Indicates the offset, in milliseconds.
has offset
Results
DataBlock_(SOSA-SSN).png
[Associated data type spec].
2016-06-12T03:44:00
hasOrdinalPosition
Status: *STABLE*
Indicates the ordinal position of the DataBlock.
has (ordinal) position
Observations
Record_(SOSA-SSN).png
[Associated data type spec].
2016-07-19T02:55:00
[XDF]
hasSampleCount
Status: *STABLE*
Counts the number of samples of the Record.
[XDF] It is defined in the StreamFooter chunk as sample_count.
has sample count
Observations
Record_(SOSA-SSN).png
[Associated data type spec].
2017-08-30T23:58:00
[ESS], [XDF]
hasSamplingRate
Status: *STABLE*
Sampling rate of the recording (Record). Its measurement unit is Hertz (Hz).
Related concept for a Device: SamplingRate.
[ESS 2.0]: each modality may be recorded at a different sampling rate.
(a record) has sampling rate
Observations
Descriptor_(SOSA-SSN).png
Record_(SOSA-SSN).png
[Associated data type spec].
2016-07-19T01:59:00
[ESS], [XDF]
hasStartChannel
Status: *STABLE*
The channel number in the recording where the modality block starts.
has start channel
Context,Session,Observations,Actuation
TimeInterval.png
[Associated data type spec].
2018-02-10T03:44:00
[OWL-Time]
hasStartTime
Status: *STABLE*
XSD dateTime associated with an entity, indicating the starting point of a time interval. BCI applications should, at least, measure these time values in seconds.
For simplicity, this ontology does not explicitly define a TimeInterval concept.
has start (initial) date time
AnnotationTag
Marker_(SOSA-SSN).png
[Associated data type spec].
2016-05-22T21:23:00
hasState
Status: *STABLE*
Captures the alphabet symbol of the ResponseTag, representing its "State". BCI domain applications can define their own specific symbolic scheme to represent their relevant States.
An alphabet-based symbolic scheme could be: "A B C D E...". Thus, a specific State would be: {"B"}
has state
AnnotationTag,Context,Results,SystemCapabilities
Marker_(SOSA-SSN).png
PlayoutInstant.png
[Associated data type spec].
2016-06-29T00:34:00
[OWL-Time]
hasTimeStamp
Status: *STABLE*
XSD dateTimeStamp of a specific time instant. BCI applications are recommended to measure the time with high precision, in order to keep a proper granularity for this measurement unit. BCI applications should, at least, measure the time instants in milliseconds.
has time stamp
Context,Descriptor,Session
[Associated data type spec].
2016-09-29T03:24:00
hasTitle
Status: *STABLE*
The given title or logical name of an entity. It is used to associate a human-readable label to entities.
has title
File
Stream
Observations
RecordedData_(SOSA-SSN).png
[Associated data type spec].
2016-06-01T01:55:00
hasType
Status: *STABLE*
Indicates the AccessMethod's type: the nature of how the RecordedData can be accessed. This ontology only defines the two following Access Method Types: (*) "File": archived access method. (*) "Stream": real-time access method.
has access method type (nature)
dbp:Person
old-ssn:Observation
oldssn:SensorDataSheet
voaf:Vocabulary
Abstract
Any Entity that cannot be located in space-time. E.g. mathematical entities: formal semantics elements, regions within dimensional spaces, etc.
Abstract
Astratto
1
Action
DUL:Action
An Event with at least one Agent that isParticipantIn it, and that executes a Task that typically isDefinedIn a Plan, Workflow, Project, etc.
Action
Azione
Agent
Additional comment: a computational agent can be considered as a PhysicalAgent that realizes a certain class of algorithms (that can be considered as instances of InformationObject) that allow to obtain some behaviors that are considered typical of agents in general. For an ontology of computational objects based on DOLCE see e.g. http://www.loa-cnr.it/COS/COS.owl, and http://www.loa-cnr.it/KCO/KCO.owl.
Any agentive Object , either physical (e.g. a whale, a robot, an oak), or social (e.g. a corporation, an institution, a community).
Agent
Agente
Amount
A quantity, independently from how it is measured, computed, etc.
Amount
Quantità
BiologicalObject
Biological object
ChemicalObject
Chemical object
Classification
A special kind of Situation that allows to include time indexing for the classifies relation in situations. For example, if a Situation s 'my old cradle is used in these days as a flower pot' isSettingFor the entity 'my old cradle' and the TimeIntervals '8June2007' and '10June2007', and we know that s satisfies a functional Description for aesthetic objects, which defines the Concepts 'flower pot' and 'flower', then we also need to know what concept classifies 'my old cradle' at what time.
In order to solve this issue, we need to create a sub-situation s' for the classification time: 'my old cradle is a flower pot in 8June2007'. Such sub-situation s' isPartOf s.
Classification
Classificazione
time-indexed classification
Collection
DUL:Collection
Any container for entities that share one or more common properties. E.g. "stone objects", "the nurses", "the Louvre Aegyptian collection", all the elections for the Italian President of the Republic.
A collection is not a logical class: a collection is a first-order entity, while a class is second-order.
Nor is a collection an aggregate of its member entities (see e.g. the ObjectAggregate class).
Collection
Collezione
Collective
A Collection whose members are agents, e.g. "the nurses", "the Italian rockabilly fans".
Collectives, façon de parler, can act as agents, although they are not assumed here to be agents (they are even disjoint from the class SocialAgent). This is represented by admitting collectives in the range of the relations having Agent in their domain or range.
Collective
Collettivo
CollectiveAgent
A SocialAgent that is actedBy agents that are (and act as) members of a Collective. A collective agent can have roles that are also roles of those agents.
For example, in sociology, a 'group action' is the situation in which a number of people (that result to be members of a collective) in a given area behave in a coordinated way in order to achieve a (often common) goal. The Agent in such a Situation is not single, but a CollectiveAgent (a Group). This can be generalized to the notion of social movement, which assumes a large Community or even the entire Society as agents.
The difference between a CollectiveAgent and an Organization is that a Description that introduces a CollectiveAgent is also one that unifies the corresponding Collective. In practice, this difference makes collective agents 'less stable' than organizations, because they have a dedicated, publicly recognizable Description that is conceived to introduce them.
Agente collettivo
Collective agent
Community
Community
Comunità
Concept
dul:Concept
A Concept is a SocialObject, and isDefinedIn some Description; once defined, a Concept can be used in other Description(s). If a Concept isDefinedIn exactly one Description, see the LocalConcept class.
The classifies relation relates Concept(s) to Entity(s) at some TimeInterval
Concept
Concetto
Configuration
A collection whose members are 'unified', i.e. organized according to a certain schema that can be represented by a Description.
Typically, a configuration is the collection that emerges out of a composed entity: an industrial artifact, a plan, a discourse, etc.
E.g. a physical book has a configuration provided by the part-whole schema that holds together its cover, pages, ink. That schema, based on the individual relations between the book and its parts, can be represented in a reified way by means of a (structural) description, which is said to 'unify' the book configuration.
Configuration
Configurazione
Contract
(The content of) an agreement between at least two agents that play a Party Role, about some contract object (a Task to be executed).
Contract
Contratto
Description
A Description is a SocialObject that represents a conceptualization.
It can be thought also as a 'descriptive context' that uses or defines concepts in order to create a view on a 'relational context' (cf. Situation) out of a set of data or observations.
For example, a Plan is a Description of some actions to be executed by agents in a certain way, with certain parameters; a Diagnosis is a Description that provides an interpretation for a set of observed entities, etc.
Descriptions 'define' or 'use' concepts, and can be 'satisfied' by situations.
Description
Descrizione
Design
A Description of the Situation, in terms of structure and function, held by an Entity for some reason.
A design is usually accompanied by the rationales behind the construction of the designed Entity (i.e. of the reasons why a design is claimed to be as such). For example, the actual design (a Situation) of a car or of a law is based on both the specification (a Description) of the structure, and the rationales used to construct cars or laws.
While designs typically describe entities to be constructed, they can also be used to describe 'refunctionalized' entities, or to hypothesize unknown functions. For example, a cradle can be refunctionalized as a flowerpot based on a certain home design.
Design
Design
DesignedArtifact
A PhysicalArtifact that is also described by a Design. This excludes simple recycling or refunctionalization of natural objects. Most common sense 'artifacts' can be included in this class: cars, lamps, houses, chips, etc.
Artefatto progettato
Designed artifact
DesignedSubstance
Diagnosis
A Description of the Situation of a system, usually applied in order to control a normal behavior, or to explain a notable behavior (e.g. a functional breakdown).
Diagnosi
Diagnosis
Entity
dul:Entity
Anything: real, possible, or imaginary, which some modeller wants to talk about for some purpose.
Entity
Entità
DUL:Event
Event
dul:Event
Any physical, social, or mental process, event, or state.
More theoretically, events can be classified in different ways, possibly based on 'aspect' (e.g. stative, continuous, accomplishment, achievement, etc.), on 'agentivity' (e.g. intentional, natural, etc.), or on 'typical participants' (e.g. human, physical, abstract, food, etc.).
Here no special direction is taken, and the following explains why: events are related to observable situations, and the same event can be given different views at the same time.
If a position has to be suggested here anyway, the participant-based classification of events seems the most stable and appropriate for many modelling problems.
(1) Alternative aspectual views
Consider the same event, 'rock erosion in the Sinni valley': it can be conceptualized as an accomplishment (what has brought a certain state to occur), as an achievement (the state resulting from a previous accomplishment), as a punctual event (if we collapse the time interval of the erosion into a time point), or as a transition (something that has changed from one state to a different one).
In the erosion case, we could therefore have good motivations to shift from one aspect to another: a) causation focus, b) effectual focus, c) historical condensation, d) transition (causality).
The different views refer to the same event, but are still different: how to live with this seeming paradox?
A typical solution e.g. in linguistics (cf. Levin's aspectual classes) and in DOLCE Full (cf. WonderWeb D18 axiomatization) is to classify events based on aspectual differences. But this solution would create different identities for the same event, where the difference is only based on the modeller's attitude.
An alternative solution is suggested here, and exploits the notion of (observable) Situation; a Situation is a view, consistent with a Description, that can be observed of a set of entities. It can also be seen as a 'relational context' created by an observer on the basis of a 'frame'. Therefore, a Situation allows one to create a context where each particular view can have a proper identity, while the Event preserves its own identity.
For example, ErosionAsAccomplishment is a Situation where rock erosion is observed as a process leading to a certain achievement: the conditions (roles, parameters) that suggest such view are stated in a Description, which acts as a 'theory of accomplishments'. Similarly, ErosionAsTransition is a Situation where rock erosion is observed as an event that has changed a state to another: the conditions for such interpretation are stated in a different Description, which acts as a 'theory of state transitions'.
Note that in no case is the actual event changed or enriched in its parts by the aspectual view.
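The Situation-based solution above can be sketched in Turtle. The `ex:` individuals are hypothetical; only `dul:includesEvent` and `dul:satisfies` are assumed from DUL:

```turtle
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix ex:  <http://example.org/erosion#> .

# One Event, two Situations with distinct identities, each satisfying
# a different Description (the 'theory' licensing that view).
ex:sinniErosion a dul:Event .

ex:erosionAsAccomplishment a dul:Situation ;
    dul:includesEvent ex:sinniErosion ;
    dul:satisfies ex:TheoryOfAccomplishments .

ex:erosionAsTransition a dul:Situation ;
    dul:includesEvent ex:sinniErosion ;
    dul:satisfies ex:TheoryOfStateTransitions .

ex:TheoryOfAccomplishments  a dul:Description .
ex:TheoryOfStateTransitions a dul:Description .
```

The event keeps a single identity; only the two observing Situations differ.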
(2) Alternative intentionality views
Similarly to aspectual views, several intentionality views can be provided for the same Event. For example, one can investigate whether an avalanche has been caused by immediate natural forces, or whether there is any hint of an intentional effort to activate those natural forces.
Also in this case, the Event as such does not have different identities, while the causal analysis generates situations with different identities, according to which Description is taken for interpreting the Event.
On the other hand, if the possible actions of an Agent causing the starting of an avalanche are taken as parts of the Event, then this makes its identity change, because we are adding a part to it.
Therefore, whether intentionality is a criterion for classifying events depends on whether an ontology designer wants to consider causality a relevant dimension of event identity.
(3) Alternative participant views
A slightly different case is when we consider the basic participants in an Event. In this case, the identity of the Event is affected by the participating objects, because it depends on them.
For example, if snow, mountain slopes, wind, waves, etc. are considered the basic participants in an avalanche, or if we also want to add water, human agents, etc., the identity of the avalanche changes.
Anyway, this approach to event classification is based on the designer's choices, and more accurately mirrors lexical or commonsense classifications (see e.g. WordNet 'supersenses' for verb synsets).
Ultimately, this discussion has no end, because realists will keep defending the idea that events in reality are not changed by the way we describe them, while constructivists will keep defending the idea that, whatever 'true reality' is about, it can't be modelled without the theoretical burden of how we observe and describe it.
Both positions are in principle valid, but, if taken too radically, they focus on issues that are only partly relevant to the aim of computational ontologies, which assist domain experts in representing a certain portion of reality according to their own assumptions and requirements.
For this reason, in this ontology version of DOLCE, both events and situations are allowed, together with descriptions (the reason for the inclusion of the D&S framework in DOLCE), in order to encode the modelling needs, independently of the position (if any) chosen by the model designer.
Event
Evento
EventType
A Concept that classifies an Event. An event type describes how an Event should be interpreted, executed, expected, seen, etc., according to the Description that the EventType isDefinedIn (or used in).
Event type
Tipo di evento
DUL:FormalEntity
FormalEntity
Entities that are formally defined and are considered independent from the social context in which they are used. They cannot be localized in space or time. Also called 'Platonic entities'.
Mathematical and logical entities are included in this class: sets, categories, tuples, constants, variables, etc.
Abstract formal entities are distinguished from information objects, which are supposed to be part of a social context, and are localized in space and time, therefore being (social) objects.
For example, the class 'Quark' is an abstract formal entity from the purely set-theoretical perspective, but it is an InformationObject from the viewpoint of ontology design, when e.g. implemented in a logical language like OWL.
Abstract formal entities are also distinguished from Concept(s), Collection(s), and Description(s), which are part of a social context, therefore being SocialObject(s) as well.
For example, the class 'Quark' is an abstract FormalEntity from the purely set-theoretical perspective, but it is a Concept within history of science and cultural dynamics.
These distinctions make it possible to represent two different notions of 'semantics': the first one is abstract and formal ('formal semantics'), and formallyInterprets symbols that are about entities whatsoever; for example, the term 'Quark' isAbout the Collection of all quarks, and that Collection isFormalGroundingFor the abstract class 'Quark' (in the extensional sense).
The second notion is social, localized in space-time ('social semantics'), and can be used to interpret entities in the intensional sense. For example, the Collection of all quarks isCoveredBy the Concept 'Quark', which is also expressed by the term 'Quark'.
Entità formale astratta
Formal entity
FunctionalSubstance
Functional substance
Goal
The Description of a Situation that is desired by an Agent, and usually associated to a Plan that describes how to actually achieve it
Goal
Scopo
Group
A CollectiveAgent whose acting agents conceptualize a same SocialRelation .
Group
Gruppo
InformationEntity
A piece of information, be it concretely realized or not. It is a catchall class, intended to bypass the ambiguity of data or text that could denote either an expression or a concrete realization of that expression.
In a semiotic model, there is no special reason to distinguish between them; however, we may want to distinguish between a pure information content (e.g. the 3rd Gymnopédie by Satie) and its possible concrete realizations as a music sheet, a piano execution, the reproduction of the execution, its publishing as a record, etc.
DUL:InformationObject
InformationObject
A piece of information, such as a musical composition, a text, a word, a picture, independently from how it is concretely realized.
Information object
Oggetto informativo
InformationRealization
A concrete realization of an InformationObject, e.g. the written document (object) containing the text of a law, a poetry reading (event), the dark timbre (quality) of a sound (event) in the execution (event) of a musical composition, realizing a 'misterioso' tempo indication.
The realization of an information object also realizes information about itself. This is a special semiotic feature, which makes it possible to avoid a traditional paradox, by which a piece of information is often supposed to be about itself besides other entities (e.g. the information object 'carpe diem' is about its meaning in Horace's Odes (let alone its fortune in Western culture and beyond), but also about its expression in context: 'dum loquimur, fugerit invida aetas: carpe diem, quam minimum credula postero', with the sound and emotional relations that it could activate).
This is expressed in OWL2 with a local reflexivity axiom of the dul:InformationRealization class.
Information realization
Informazione concreta
LocalConcept
A Concept that isDefinedIn exactly 1 Description. For example, the Concept 'coffee' in a 'preparesCoffee' relation can be defined in that relation, and for all other Description(s) that use it, the isConceptUsedIn property should be applied. Notice therefore that not necessarily all Concept(s) isDefinedIn exactly 1 Description.
Local concept
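The coffee example can be sketched in Turtle (the `ex:` individuals are hypothetical; `dul:isDefinedIn` and `dul:isConceptUsedIn` are the properties named in the definition above):

```turtle
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix ex:  <http://example.org/coffee#> .

ex:preparesCoffee a dul:Description .

ex:Coffee a dul:LocalConcept ;
    dul:isDefinedIn ex:preparesCoffee .    # exactly one defining Description

# Any other Description that needs the concept only 'uses' it:
ex:coffeeShopWorkflow a dul:Description .
ex:Coffee dul:isConceptUsedIn ex:coffeeShopWorkflow .
```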
DUL:Method
Method
dul:Method
A method is a Description that defines or uses concepts in order to guide carrying out actions aimed at a solution with respect to a problem.
It is different from a Plan, because plans could be carried out in order to follow a method, but a method can be followed by executing alternative plans.
Method
Metodo
Narrative
Narrative
DUL:NaturalPerson
NaturalPerson
A person in the physical commonsense intuition: 'have you seen that person walking down the street?'
Natural person
Persona fisica
Norm
A social norm.
Norm
Norma
DUL:Object
Object
dul:Object
Any physical, social, or mental object, or a substance. Following DOLCE Full, objects are always participating in some event (at least their own life), and are spatially located.
Object
Oggetto
ObjectAggregate
Aldo Gangemi
2021-02-21T23:35:02Z
An aggregate of distributed objects, members of a same Collection, e.g. the stars in a constellation, the parts of a car, the employees of a company, the entries from an encyclopedia, the concepts expressed in a speech, etc.
It cannot be defined by means of an equivalence axiom, because it would require the same Collection for all members, an axiom that cannot be expressed in OWL.
Organism
A physical object with biological characteristics; typically, organisms can self-reproduce.
Organism
Organismo
Organization
An internally structured, conventionally created SocialAgent, needing a specific Role and Agent that plays it, in order to act.
Un agente sociale strutturato internamente e creato convenzionalmente. Per agire, ha bisogno di ruoli e agenti che li ricoprano.
Organization
Organizzazione
Parameter
A Concept that classifies a Region; the difference between a Region and a Parameter is that regions represent sets of observable values, e.g. the height of a given building, while parameters represent constraints or selections on observable values, e.g. 'VeryHigh'. Therefore, parameters can also be used to constrain regions, e.g. VeryHigh on a subset of values of the Region Height applied to buildings, or to add an external selection criterion, such as measurement units, to regions, e.g. Meter on a subset of values from the Region Length applied to roads.
Parameter
Parametro
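The Region/Parameter split can be sketched as follows (hypothetical `ex:` individuals; `dul:parametrizes` and `dul:hasDataValue` are assumed from DUL, the latter also mentioned under Region below):

```turtle
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/buildings#> .

# A Region holds an observable value...
ex:towerHeight a dul:Region ;
    dul:hasDataValue "120.0"^^xsd:double .

# ...while a Parameter constrains or selects over such values.
ex:VeryHigh a dul:Parameter ;
    dul:parametrizes ex:towerHeight .

# A measurement unit as an external selection criterion on the region:
ex:Meter a dul:UnitOfMeasure ;
    dul:parametrizes ex:towerHeight .
```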
Parthood
Aldo Gangemi
2021-04-03T13:53:57Z
A special kind of Situation that makes it possible to include time indexing for the hasPart relation in situations.
For example, if a Situation s 'finally, my bike has a luggage rack' isSettingFor the entity 'my bike' and the TimeInterval 'now', or more specifically '29March2021', we need to time-index the part relation. With Parthood, we use the includesWhole and includesPart properties.
This can be done similarly for other arguments of parthood, e.g. location, configuration, topology, etc.
Concerning the possible property characteristics reused from mereology (transitivity, asymmetry, reflexivity), they need to be implemented by means of rules (or, in a limited way, property chains using the binary hasPart or hasProperPart properties).
A key is also added to ensure identification constraints of time-indexed parthood.
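The bike example, sketched with the includesWhole/includesPart properties mentioned above (the `ex:` individuals are hypothetical; `dul:includesTime` is assumed as the time-indexing property):

```turtle
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix ex:  <http://example.org/bike#> .

ex:bikeRackParthood a dul:Parthood ;
    dul:includesWhole ex:myBike ;
    dul:includesPart  ex:luggageRack ;
    dul:includesTime  ex:march29_2021 .   # time-indexes the part relation

ex:march29_2021 a dul:TimeInterval .
```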
parthood
Pattern
Any invariance detected from a dataset, or from observation; also, any invariance proposed based on top-down considerations.
E.g. patterns detected and abstracted by an organism, by pattern recognition algorithms, by machine learning techniques, etc.
An occurrence of a pattern is an 'observable', or detected, Situation.
Pattern
Person
Persons in the commonsense intuition, which apparently does not distinguish between natural and social persons.
Person
Persona
Personification
A social entity with agentive features, but whose status is the result of a cultural transformation from e.g. a PhysicalObject, an Event, an Abstract, another SocialObject, etc. For example: the holy grail, deus ex machina, gods, magic wands, etc.
Personification
PhysicalAgent
A PhysicalObject that is capable of self-representing (conceptualizing) a Description in order to plan an Action.
A PhysicalAgent is a substrate for (actsFor) a SocialAgent.
Agente fisico
Physical agent
PhysicalArtifact
Any PhysicalObject that isDescribedBy a Plan.
This axiomatization is weak, but allows us to talk of artifacts in a very general sense, i.e. including recycled objects, objects with an intentional functional change, natural objects that are given a certain function even though they are not modified or structurally designed, etc. PhysicalArtifact(s) are not considered disjoint from PhysicalBody(s), in order to allow a dual classification when needed; e.g., FunctionalSubstance(s) are included here as well.
Immaterial (non-physical) artifacts (e.g. texts, ideas, cultural movements, corporations, communities, etc.) can be modelled as social objects (see SocialObject), which are all 'artifactual' in the weak sense assumed here.
Artefatto fisico
Physical artifact
PhysicalAttribute
Physical value of a physical object, e.g. density, color, etc.
Caratteristica fisica
Physical attribute
PhysicalBody
Physical bodies are PhysicalObject(s), for which we tend to neutralize any possible artifactual character. They can have several granularity levels: geological, chemical, physical, biological, etc.
Physical body
PhysicalObject
Any Object that has a proper space region. The prototypical physical object has also an associated mass, but the nature of its mass can greatly vary based on the epistemological status of the object (scientifically measured, subjectively possible, imaginary).
Oggetto fisico
Physical object
PhysicalPlace
A physical object that is inherently located; for example, a water area.
Luogo fisico
Physical place
Place
Socially or cognitively dependent locations: political geographic entities (Rome, Lesotho), and non-material locations determined by the presence of other entities ("the area close to Rome") or of pivot events or signs ("the area where the helicopter fell"), as well as identified as complements to other entities ("the area under the table"), etc.
In this generic sense, a Place is a 'dependent' location. For 'non-dependent' locations, cf. the PhysicalPlace class. For an abstract (dimensional) location, cf. the SpaceRegion class.
Luogo
Place
Plan
A Description having an explicit Goal, to be achieved by executing the plan.
Piano
Plan
PlanExecution
Plan executions are situations that proactively satisfy a plan. Subplan executions are proper parts of the whole plan execution.
Esecuzione di piano
Plan execution
Process
This is a placeholder for events that are considered in their evolution, or anyway not strictly dependent on agents, tasks, and plans.
See Event class for some thoughts on classifying events. See also 'Transition'.
Process
Processo
Project
A Plan that defines Role(s), Task(s), and a specific structure for tasks to be executed in relation to goals to be achieved, in order to achieve the main goal of the project. In other words, a project is a plan with a subgoal structure and multiple roles and tasks.
Progetto
Project
Quality
dul:Quality
Any aspect of an Entity (but not a part of it), which cannot exist without that Entity. For example, the way the surface of a specific PhysicalObject looks, or the specific light of a place at a certain time, are examples of Quality, while the encoding of a Quality into e.g. a PhysicalAttribute should be modeled as a Region.
From the design viewpoint, the Quality-Region distinction is useful only when individual aspects of an Entity are considered in a domain of discourse.
For example, in an automotive context, it would be irrelevant to consider the aspects of car windows for a specific car, unless the factory wants to check a specific window against design parameters (anomaly detection).
On the other hand, in an antiques context, the individual aspects for a specific piece of furniture are a major focus of attention, and may constitute the actual added value, because the design parameters for old furniture are often not fixed, and may not be viewed as 'anomalies'.
Quality
Qualità
Region
dul:Region
Any region in a dimensional space (a dimensional space is a maximal Region), which can be used as a value for a quality of an Entity. For example, TimeInterval, SpaceRegion, PhysicalAttribute, Amount, SocialAttribute are all subclasses of Region.
Regions are not data values in the ordinary knowledge representation sense; in order to get patterns for modelling data, see the properties representsDataValue and hasDataValue.
Region
Regione
Relation
dul:Relation
Relations are descriptions that can be considered as the counterpart of formal relations (that are included in the FormalEntity class).
For example, 'givingGrantToInstitution(x,y,z)' with three argument types: Provider(x),Grant(y),Recipient(z), can have a Relation counterpart: 'GivingGrantToInstitution', which defines three Concept instances: Provider,Grant,Recipient.
Since social objects are not formal entities, Relation includes here any 'relation-like' entity in common sense, including social relations.
Relation
Relazione
Right
A legal position by which an Agent is entitled to obtain something from another Agent, under specified circumstances, through an enforcement made explicit in a Law, a Contract, etc.
Diritto
Right
DUL:Role
Role
A Concept that classifies an Object.
Role
Ruolo
Set
Insieme
Set
DUL:Situation
Situation
A view, consistent with ('satisfying') a Description, on a set of entities.
It can also be seen as a 'relational context' created by an observer on the basis of a 'frame' (i.e. a Description).
For example, a PlanExecution is a context including some actions executed by agents according to certain parameters and expected tasks to be achieved from a Plan; a DiagnosedSituation is a context of observed entities that is interpreted on the basis of a Diagnosis, etc.
Situation is also able to represent reified n-ary relations, where isSettingFor is the top-level relation for all binary projections of the n-ary relation.
If used in a transformation pattern for n-ary relations, the designer should take care of adding (some or all) OWL2 keys, corresponding to binary projections of the n-ary, to a subclass of Situation. Otherwise the 'identification constraint' (Calvanese et al., IJCAI 2001) might be violated.
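For example, a ternary 'giving a grant' relation (cf. the Relation class) could be reified as a Situation subclass with an OWL2 key over its binary projections; the `ex:` terms are hypothetical subproperties of `dul:isSettingFor`:

```turtle
@prefix dul:  <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/grants#> .

# Hypothetical binary projections of the n-ary relation:
ex:hasProvider  rdfs:subPropertyOf dul:isSettingFor .
ex:hasGrant     rdfs:subPropertyOf dul:isSettingFor .
ex:hasRecipient rdfs:subPropertyOf dul:isSettingFor .

# The key enforces the identification constraint on the reified relation.
ex:GrantGiving rdfs:subClassOf dul:Situation ;
    owl:hasKey ( ex:hasProvider ex:hasGrant ex:hasRecipient ) .

ex:giving42 a ex:GrantGiving ;
    ex:hasProvider  ex:fundingAgency ;
    ex:hasGrant     ex:grant123 ;
    ex:hasRecipient ex:universityX .
```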
Situation
Situazione
SocialAgent
Any individual whose existence is granted simply by its social communicability and capability of action (through some PhysicalAgent).
Agente sociale
Social agent
SocialObject
Any Object that exists only within some communication Event, in which at least one PhysicalObject participates.
In other words, all objects that have been or are created in the process of social communication: for the sake of communication (InformationObject), for incorporating new individuals (SocialAgent, Place), for contextualizing or interpreting existing entities (Description, Concept), or for collecting existing entities (Collection).
Being dependent on communication, all social objects need to be expressed by some information object (information objects are self-expressing).
Oggetto sociale
Social object
SocialObjectAttribute
Any Region in a dimensional space that is used to represent some characteristic of a SocialObject, e.g. judgment values, social scalars, statistical attributes over a collection of entities, etc.
Caratteristica sociale
Social attribute
SocialPerson
A SocialAgent that needs the existence of a specific NaturalPerson in order to act (but the lifetime of the NaturalPerson has only to overlap that of the SocialPerson).
Persona sociale
Social person
Formerly: Person (changed to avoid confusion with commonsense intuition)
SocialRelation
Any social relationship.
Relazione sociale
Social relation
SpaceRegion
Any Region in a dimensional space that is used to localize an Entity; i.e., it is not used to represent some characteristic (e.g. it excludes time intervals, colors, size values, judgment values, etc.). Unlike a Place, a space region has a specific dimensional space.
Regione di spazio
Space region
DUL:SpatioTemporalRegion
SpatioTemporalRegion
Substance
Any PhysicalBody that has not necessarily specified (designed) boundaries, e.g. a pile of trash, some sand, etc.
In this sense, an artistic object made of trash or a dose of medicine in the form of a pill would be a FunctionalSubstance and a DesignedArtifact, since its boundaries are specified by a Design; aleatoric objects that are outcomes of an artistic process might still be considered DesignedArtifact(s), and Substance(s).
Sostanza
Substance
Task
An EventType that classifies an Action to be executed.
For example, reaching a destination is a task that can be executed by performing certain actions, e.g. driving a car, buying a train ticket, etc.
The actions to execute a task can also be organized according to a Plan that is not the same as the one that defines the task (if any).
For example, reaching a destination could be defined by a plan to get on holidays, while the plan to execute the task can consist of putting some travels into a sequence.
Task
Task
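The destination example in Turtle (the `ex:` individuals are hypothetical; `dul:isExecutedIn` is assumed as the DUL property linking a Task to an Action executing it, and `dul:isDefinedIn` links it to its defining Plan):

```turtle
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix ex:  <http://example.org/travel#> .

ex:ReachDestination a dul:Task ;
    dul:isDefinedIn  ex:holidayPlan ;         # the plan defining the task
    dul:isExecutedIn ex:drivingToTheCoast .   # one action executing it

ex:holidayPlan       a dul:Plan .
ex:drivingToTheCoast a dul:Action .
```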
Theory
A Theory is a Description that represents a set of assumptions for describing something, usually general. Scientific, philosophical, and commonsense theories can be included here.
This class can also be used to act as 'naturalized reifications' of logical theories (of course, they will be necessarily incomplete in this case, because second-order entities are represented as first-order ones).
Teoria
Theory
TimeIndexedRelation
Aldo Gangemi
2021-02-24T14:24:23Z
A Situation that includes a time indexing in its setting, thus making it possible to order any binary relation (property) in time.
TimeInterval
Any Region in a dimensional space that aims at representing time.
Intervallo di tempo
Time interval
Transition
A transition is a Situation that creates a context for three TimeInterval(s), two additional different Situation(s), one Event, one Process, and at least one Object: the Event is observed as the cause for the transition, one Situation is the state before the transition, the second Situation is the state after the transition, the Process is the invariance under some different transitions (including the one represented here), in which at least one Object is situated. Finally, the time intervals position the situations and the transitional event in time.
This class of situations partly encodes the ontology underlying typical engineering algebras for processes, e.g. Petri Nets.
A full representation of the transition ontology is outside the expressivity of OWL, because we would need qualified cardinality restrictions, coreference, property equivalence, and property composition.
Transition
Transizione
TypeCollection
A Collection whose members are the maximal set of individuals that share the same (named) type, e.g. "the gem stones", "the Italians".
This class is very useful to apply a variety of the so-called "ClassesAsValues" design pattern, when it is used to talk about the extensional aspect of a class. An alternative variety of the pattern applies to the intensional aspect of a class, and the class Concept should be used instead.
Collezione di un tipo
Type collection
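A sketch of the extensional variety of the pattern (hypothetical `ex:` individuals; `dul:hasMember` is assumed as DUL's collection-membership property):

```turtle
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix ex:  <http://example.org/people#> .

# Extensional reading: the collection of all Italians...
ex:TheItalians a dul:TypeCollection ;
    dul:hasMember ex:maria, ex:paolo .

# ...as opposed to the intensional reading, which uses a Concept:
ex:Italian a dul:Concept .
```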
UnitOfMeasure
Units of measure are conceptualized here as parameters on regions, which can be valued as datatype values.
Unit of measure
Unità di misura
Workflow
A Plan that defines Role(s), Task(s), and a specific structure for tasks to be executed, usually supporting the work of an Organization.
Workflow
Workflow
WorkflowExecution
Esecuzione di workflow
Workflow execution
rdf:List
owl:Thing
skos:Collection
Collection
A meaningful collection of concepts.
Labelled collections can be used where you would like a set of concepts to be displayed under a 'node label' in the hierarchy.
skos:Concept
Concept
An idea or notion; a unit of thought.
skos:ConceptScheme
Concept Scheme
A set of concepts, optionally including statements about semantic relationships between those concepts.
Thesauri, classification schemes, subject heading lists, taxonomies, 'folksonomies', and other types of controlled vocabulary are all examples of concept schemes. Concept schemes are also embedded in glossaries and terminologies.
A concept scheme may be defined to include concepts from different sources.
skos:OrderedCollection
Ordered Collection
An ordered collection of concepts, where both the grouping and the ordering are meaningful.
Ordered collections can be used where you would like a set of concepts to be displayed in a specific order, and optionally under a 'node label'.
time:TemporalEntity
sosa:ActuatableProperty
An actuatable quality (property, characteristic) of a FeatureOfInterest.
Actuatable Property
An actuatable quality (property, characteristic) of a FeatureOfInterest.
A window actuator acts by changing the state between a frame and a window. The ability of the window to be opened and closed is its ActuatableProperty.
sosa:Actuation
An Actuation carries out an (Actuation) Procedure to change the state of the world using an Actuator.
Actuation
An Actuation carries out an (Actuation) Procedure to change the state of the world using an Actuator.
The activity of automatically closing a window if the temperature in a room drops below 20 degree Celsius. The activity is the Actuation and the device that closes the window is the Actuator. The Procedure is the rule, plan, or specification that defines the conditions that triggers the Actuation, here a drop in temperature.
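The window example as SOSA triples (the `ex:` individuals are hypothetical; the properties are standard SOSA terms):

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ex:   <http://example.org/building#> .

ex:closeWindow1 a sosa:Actuation ;
    sosa:madeByActuator       ex:windowActuator ;
    sosa:actsOnProperty       ex:window1OpenState ;
    sosa:usedProcedure        ex:closeBelow20C ;   # the triggering rule
    sosa:hasFeatureOfInterest ex:window1 .

ex:window1          a sosa:FeatureOfInterest .
ex:window1OpenState a sosa:ActuatableProperty .
ex:closeBelow20C    a sosa:Procedure .
```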
sosa:Actuator
A device that is used by, or implements, an (Actuation) Procedure that changes the state of the world.
Actuator
A device that is used by, or implements, an (Actuation) Procedure that changes the state of the world.
A window actuator for automatic window control, i.e., opening or closing the window.
sosa:FeatureOfInterest
The thing whose property is being estimated or calculated in the course of an Observation to arrive at a Result or whose property is being manipulated by an Actuator, or which is being sampled or transformed in an act of Sampling.
Feature Of Interest
The thing whose property is being estimated or calculated in the course of an Observation to arrive at a Result or whose property is being manipulated by an Actuator, or which is being sampled or transformed in an act of Sampling.
When measuring the height of a tree, the height is the observed ObservableProperty, 20m may be the Result of the Observation, and the tree is the FeatureOfInterest. A window is a FeatureOfInterest for an automatic window control Actuator.
sosa:ObservableProperty
An observable quality (property, characteristic) of a FeatureOfInterest.
Observable Property
An observable quality (property, characteristic) of a FeatureOfInterest.
The height of a tree, the depth of a water body, or the temperature of a surface are examples of observable properties, while the value of a classic car is not (directly) observable but asserted.
sosa:Observation
Act of carrying out an (Observation) Procedure to estimate or calculate a value of a property of a FeatureOfInterest. Links to a Sensor to describe what made the Observation and how; links to an ObservableProperty to describe what the result is an estimate of, and to a FeatureOfInterest to detail what that property was associated with.
Observation
Act of carrying out an (Observation) Procedure to estimate or calculate a value of a property of a FeatureOfInterest. Links to a Sensor to describe what made the Observation and how; links to an ObservableProperty to describe what the result is an estimate of, and to a FeatureOfInterest to detail what that property was associated with.
The activity of estimating the intensity of an Earthquake using the Mercalli intensity scale is an Observation as is measuring the moment magnitude, i.e., the energy released by said earthquake.
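The tree-height example as SOSA triples (the `ex:` individuals are hypothetical; the properties, including `sosa:hasSimpleResult` mentioned under Result below, are standard SOSA terms):

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/forest#> .

ex:obs1 a sosa:Observation ;
    sosa:hasFeatureOfInterest ex:tree1 ;
    sosa:observedProperty     ex:tree1Height ;
    sosa:madeBySensor         ex:laserRangefinder1 ;
    sosa:hasSimpleResult      "20.0"^^xsd:double ;
    sosa:resultTime           "2017-06-01T12:00:00Z"^^xsd:dateTime .

ex:tree1       a sosa:FeatureOfInterest .
ex:tree1Height a sosa:ObservableProperty .
```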
sosa:Platform
A Platform is an entity that hosts other entities, particularly Sensors, Actuators, Samplers, and other Platforms.
Platform
A Platform is an entity that hosts other entities, particularly Sensors, Actuators, Samplers, and other Platforms.
A post, buoy, vehicle, ship, aircraft, satellite, cell-phone, human or animal may act as platforms for (technical or biological) sensors or actuators.
sosa:Procedure
A workflow, protocol, plan, algorithm, or computational method specifying how to make an Observation, create a Sample, or make a change to the state of the world (via an Actuator). A Procedure is re-usable, and might be involved in many Observations, Samplings, or Actuations. It explains the steps to be carried out to arrive at reproducible results.
Procedure
A workflow, protocol, plan, algorithm, or computational method specifying how to make an Observation, create a Sample, or make a change to the state of the world (via an Actuator). A Procedure is re-usable, and might be involved in many Observations, Samplings, or Actuations. It explains the steps to be carried out to arrive at reproducible results.
The measured wind speed differs depending on the height of the sensor above the surface, e.g., due to friction. Consequently, procedures for measuring wind speed define a standard height for anemometers above ground, typically 10m for meteorological measures and 2m in Agrometeorology. This definition of height, sensor placement, and so forth are defined by the Procedure.
Many observations may be created via the same Procedure, the same way as many tables are assembled using the same instructions (as information objects, not their concrete realization).
sosa:Result
The Result of an Observation, Actuation, or act of Sampling. To store an observation's simple result value one can use the hasSimpleResult property.
Result
The Result of an Observation, Actuation, or act of Sampling. To store an observation's simple result value one can use the hasSimpleResult property.
The value 20 as the height of a certain tree together with the unit, e.g., Meter.
sosa:Sample
A Sample is the result from an act of Sampling.
Feature which is intended to be representative of a FeatureOfInterest on which Observations may be made.
Physical samples are sometimes known as 'specimens'.
Samples are artifacts of an observational strategy, and have no significant function outside of their role in the observation process. The characteristics of the samples themselves are of little interest, except perhaps to the manager of a sampling campaign.
A Sample is intended to sample some FeatureOfInterest, so there is an expectation of at least one isSampleOf property. However, in some cases the identity, and even the exact type, of the sampled feature may not be known when observations are made using the sampling features.
Sample
Feature which is intended to be representative of a FeatureOfInterest on which Observations may be made.
A 'station' is essentially an identifiable locality where a sensor system or Procedure may be deployed and an observation made. In the context of the observation model, it connotes the 'world in the vicinity of the station', so the observed properties relate to the physical medium at the station, and not to any physical artifact such as a mooring, buoy, benchmark, monument, well, etc.
A statistical sample is often designed to be characteristic of an entire population, so that observations can be made regarding the sample that provide a good estimate of the properties of the population.
A transient sample, such as a ship's track or flight-line, might be identified and described, but is unlikely to be revisited exactly.
sosa:Sampler
A device that is used by, or implements, a Sampling Procedure to create or transform one or more samples.
Sampler
A device that is used by, or implements, a Sampling Procedure to create or transform one or more samples.
A ball mill, diamond drill, hammer, hypodermic syringe and needle, image Sensor or a soil auger can all act as sampling devices (i.e., be Samplers). However, sometimes the distinction between the Sampler and the Sensor is not evident, as they are packaged as a unit. A Sampler need not be a physical device.
sosa:Sampling
An act of Sampling carries out a sampling Procedure to create or transform one or more samples.
Sampling
An act of Sampling carries out a sampling Procedure to create or transform one or more samples.
Crushing a rock sample in a ball mill.
Digging a pit through a soil sequence.
Dividing a field site into quadrants.
Drawing blood from a patient.
Drilling an observation well.
Establishing a station for environmental monitoring.
Registering an image of the landscape.
Selecting a subset of a population.
Sieving a powder to separate the subset finer than 100-mesh.
Splitting a piece of drill-core to create two new samples.
Taking a diamond-drill core from a rock outcrop.
sosa:Sensor
Device, agent (including humans), or software (simulation) involved in, or implementing, a Procedure. Sensors respond to a stimulus, e.g., a change in the environment, or input data composed from the results of prior Observations, and generate a Result. Sensors can be hosted by Platforms.
Sensor
Device, agent (including humans), or software (simulation) involved in, or implementing, a Procedure. Sensors respond to a stimulus, e.g., a change in the environment, or input data composed from the results of prior Observations, and generate a Result. Sensors can be hosted by Platforms.
Accelerometers, gyroscopes, barometers, magnetometers, and so forth are Sensors that are typically mounted on a modern smart phone (which acts as Platform). Other examples of sensors include the human eyes.
sr:RelationshipNature
Nature of relationship (between samples)
Members of this class indicate the nature of a relationship between two samples
Adjacent flight-line
Females
Juveniles
Males
Pixel within image or scene
Probe spot on polished specimen
Specimen taken from borehole
Split of core sample
Station along a traverse
Sub-sample with grain size smaller than specified sieve mesh
sr:SampleRelationship
Sample relationship
Members of this class represent a relationship between one sample and another
ssn:Deployment
ssn:Property
ssn:Stimulus
ssn:System
ssn-system:Frequency
ssn-system:SystemCapability
ssn-system:SystemProperty
foaf:Agent
bci:AccessMethod
Observations
RecordedData_(SOSA-SSN).png
=== ** definition ** "(a network communication protocol and its parameters)" (*) The identity scheme of the data access. (*) The security scheme of the protocol. ** scopeNote ** This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the network communication protocols that BCI applications can use to access and retrieve the RecordedData. This ontology defines some of the aforementioned network communication protocols, commonly used by BCI applications. ** editorialNote ** This ontology defines only general-purpose (range-undefined) datatype properties to capture the following corresponding definitions: (*) For the identity scheme of the data access: hasIdentity. (*) For the security scheme of the protocol: hasSecurity and hasQoS. ===
2016-07-06T04:34:00
AccessMethod
Status: *STABLE*
This concept captures any computer network mechanism (network communication protocol) through which the RecordedData can be accessed. An AccessMethod represents any specific standard communication protocol, such as: MQTT, MQTT-SN, HTTP, CoAP, FTP, etc. For the purpose of this ontology, an AccessMethod captures only the following components: (*) Nature of the data access: stream. (*) The locator (address) of the data.
The concept will be merged into the generic abstraction of container under the oneM2M spec, and made compatible with its emerging semantic extension. A container represents a "data collection" and, therefore, it's directly related to the RecordedData concept.
The publish/subscribe mechanism is one of the preferred implementations for BCI applications.
BCI data access method
%APPLICATION%@cerebratek_nupod
File
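As a minimal illustration of the AccessMethod components described above (plain Python; the class and field names are hypothetical, not ontology terms), an AccessMethod instance can be thought of as a record carrying the nature of the data access and the locator of the data:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessMethod:
    """Hypothetical stand-in for bci:AccessMethod (illustrative only)."""
    protocol: str   # e.g. "MQTT", "CoAP", "HTTP"
    nature: str     # nature of the data access, e.g. "stream"
    locator: str    # locator (address) of the data

# An illustrative MQTT access point for some RecordedData:
mqtt_access = AccessMethod(protocol="MQTT",
                           nature="stream",
                           locator="mqtt://broker.example.org/eeg/session-42")
print(mqtt_access.protocol)  # MQTT
```

The identity and security schemes (hasIdentity, hasSecurity, hasQoS) would be additional fields in the same spirit.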
bci:AccessMethod.CoAP
Observations
RecordedData_(SOSA-SSN).png
2016-07-05T04:35:00
AccessMethod.CoAP
--This ontology should be agnostic to the data access-- $ 04:35 AM 2016-07-05 $
true
Status: *STABLE*
Represents a CoAP AccessMethod. Like HTTP, this software protocol supports IRIs and content-type negotiation.
CoAP access method
Stream
bci:AccessMethod.MQTT
Observations
RecordedData_(SOSA-SSN).png
2016-07-05T04:35:00
AccessMethod.MQTT
--This ontology should be agnostic to the data access-- $ 04:35 AM 2016-07-05 $
true
Status: *STABLE*
Represents an MQTT AccessMethod; MQTT is a machine-to-machine (M2M)/"Internet of Things" connectivity protocol, designed as an extremely lightweight publish/subscribe messaging transport. This concept describes the corresponding definition of the access parameters needed for a suitable MQTT connection.
MQTT access method
File
bci:AccessMethod.RESTful-JSON
Observations
RecordedData_(SOSA-SSN).png
2016-07-05T04:35:00
AccessMethod.RESTful-JSON
--This ontology should be agnostic to the data access-- $ 04:35 AM 2016-07-05 $
true
Status: *STABLE*
Represents a RESTful AccessMethod, where the data is exchanged in JSON format.
RESTful-JSON access method
bci:Action
Session,Subject,Context
Activity.png
Context.png
Subject.png
2018-06-05T23:32:00
Action
Status: *STABLE*
Describes a type of Context.Event issued (issues) by a Subject while performing a specific Activity in a Session. Actions are considered to be structural components of an Activity, which are done by the Subject while interacting with the Context. As an interaction event, an Action can register many PlayoutInstant.SubjectActions in a Playout.
The concept Action represents a special type of Context.Event with the sole purpose of identifying the set of related events that a Subject (only for subjects) can issue while performing an Activity during the interaction with the Context.
action
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant Action classification that Subjects can do, while performing an Activity.
bci:Activity
Session,Subject
Activity.png
2018-05-11T23:42:00
Activity
Status: *STABLE*
Activity represents the Subject's physical state while interacting with the Context during a Session. This concept describes an Activity performed by the Subject in a specific Session while interacting with a Context. BCI applications monitor the Subject's physical state during the Sessions. This concept identifies the type of Activity that the Subject is performing while the data is being recorded in a Session. Hence, each Session associates a single Subject's interactions with a single Context while performing a single Activity. An Activity can be broken down into a set of Actions, performed by the Subject while interacting with the Context.
(*) The concept of Activity is agnostic regarding the number of Subjects engaging in an individual Activity. The ontology clearly defines that the connection between Subjects and Activity(ies) is through Sessions: one Session associates one Subject performing one Activity while interacting with one Context. (*) BCI applications can use this concept as a way to annotate/mark (Marker) the Records (DataSegments).
Relationship between Activity and Aspect: (*) Commonly, BCI applications are designed to analyze how an Activity influences an Aspect: it's part of the research scheme and purpose of a BCI application. However, the BCI-O model allows an Activity to be linked with, possibly, multiple Aspects through its Session (a session connects to one activity) and its Records (a session has multiple records, and each record has its own aspect). (*) From the perspective of a BCI application, an Activity has a "main" Aspect to analyze; i.e., multiple Records of the same Session connect to the same Aspect. (*) A SPARQL triple pattern matching that connects Activity(ies) to Aspects would be:
?Session hasActivity ?Activity .
?Session hasRecord ?Record .
?Record bci:aspectOfInterest ?Aspect .
Some subclasses of this concept could be: (*) Glaucoma Tracking: This Activity type is a common example for "pre-screening" of Subjects. (*) Learning: BCI applications can apply different Stimuli (StimulusEvent) to the Subjects. This Activity type is a common example for "interactive" observations. (*) Sleeping: BCI applications don't apply any kind of Stimuli to the Subjects (there are no StimulusEvent). This Activity type is a common example for "running" observations: continuous observations.
activity
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant Activity classification that Subjects can engage in during Sessions.
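The Activity-to-Aspect traversal described above can be sketched in plain Python, treating the graph as a set of (subject, predicate, object) tuples. The instance names and the `bci:` prefixes on the predicates are illustrative, not normative:

```python
# Toy graph: one Session, one Activity, two Records sharing one Aspect.
triples = {
    (":session1", "bci:hasActivity", ":learning"),
    (":session1", "bci:hasRecord", ":record1"),
    (":session1", "bci:hasRecord", ":record2"),
    (":record1", "bci:aspectOfInterest", ":cognitiveAspect"),
    (":record2", "bci:aspectOfInterest", ":cognitiveAspect"),
}

def aspects_of_activity(activity):
    """Follow Session -> Record -> Aspect for Sessions with this Activity."""
    sessions = {s for (s, p, o) in triples
                if p == "bci:hasActivity" and o == activity}
    records = {o for (s, p, o) in triples
               if p == "bci:hasRecord" and s in sessions}
    return {o for (s, p, o) in triples
            if p == "bci:aspectOfInterest" and s in records}

print(aspects_of_activity(":learning"))  # {':cognitiveAspect'}
```

Note that both Records of the Session resolve to the same "main" Aspect, matching the modeling premise above.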
bci:Actuation
Actuation
Actuation.png
2018-05-08T18:19:00
[SSN], [Seydoux2016]
Actuation
Status: *STABLE*
[SSN] Carries out an (actuation) procedure to change the state of the world using an Actuator. The relationships from and to Actuation and other concepts are the ones defined in [SSN]. For this ontology, the following sosa:Actuation properties, qualified with an exact cardinality restriction, are of importance for the definition of Actuation: (*) sosa:madeByActuator EXACTLY 1 (sosa:Actuator): restricts the association to exactly 1 Actuator. (*) sosa:hasFeatureOfInterest EXACTLY 1 (sosa:FeatureOfInterest): restricts the association to exactly 1 ActuationTarget.
See general remark about: 2_MAPPINGS-TO-SAN
See general remark about: PROCEDURES
%GENERAL_EXAMPLE%@Actuation-Use-Case
actuation
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant Actuation classification that Subjects can engage in during Sessions.
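The two EXACTLY 1 restrictions in the Actuation definition above can be checked mechanically. A minimal sketch in plain Python (toy data; the instance names are illustrative):

```python
from collections import Counter

# (subject, predicate, object) triples describing one Actuation (toy data).
triples = [
    (":actuation1", "sosa:madeByActuator", ":windowMotor"),
    (":actuation1", "sosa:hasFeatureOfInterest", ":window"),
]

# Exact cardinality restrictions on sosa:Actuation, per the definition above.
RESTRICTIONS = {"sosa:madeByActuator": 1, "sosa:hasFeatureOfInterest": 1}

def violated_restrictions(subject):
    """Return the restricted properties whose value count is not exactly 1."""
    counts = Counter(p for (s, p, _) in triples if s == subject)
    return [p for p, n in RESTRICTIONS.items() if counts[p] != n]

print(violated_restrictions(":actuation1"))  # [] -> both restrictions hold
```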
bci:ActuationEvent
Actuation,Context
Actuation.png
2018-05-08T18:53:00
[Seydoux2016]
ActuationEvent
Status: *STABLE*
Represents a transition (something that has changed from a state to a different one: ActuationTarget) ─a modification (ImpactedProperty)─ in the Context as the result of an actuation (ActuationResult involves ActuationEvent). From the Context perspective, this concept is a Context.Event (triggers(-ed) by an Actuator) that changes the ImpactedProperty of the ActuationTarget. [Seydoux2016] Following the Actuation-Actuator-Effect (AAE) design pattern proposed for the IoT Ontology (IoT-O), this concept is taken from the following relationships involving the san:Effect definition:
[san:Actuator] ------ (triggers) ------ [san:Effect]
[san:Actuation] ----- (involves) ------ [san:Effect]
[san:Effect] -------- (impacts) ------- [ImpactedProperty]
If needed, BCI applications can time stamp an ActuationEvent.
See general remark about: 2_MAPPINGS-TO-SAN
san:Effect is defined in [SAN] as: "Concept bound to the definition of an actuator as an agent having an effect on the physical world. Therefore, an effect is any kind of physical modification induced by an actuator." In order to be more semantically precise, and based on the Semantic Sensor Network Ontology (W3C Recommendation, 19 October 2017), Section 6.1, "Dolce-Ultralite Alignment Module" (SOSA/SSN definitions aligned with Dolce-Ultralite, DUL), the concept san:Effect is described distinctively by the following combined ontological notions: (*) A happening that impacts a quality (DUL:Quality), or property (ssn:Property), with the capability of an Actuation to act on it (sosa:actsOnProperty), that is, a type of sosa:ActuatableProperty; the ImpactedProperty. (*) An event (DUL:Event) induced by (triggers) an Actuator, that modifies (changes) the physical world (ActuationTarget): a type of Context.Event; the ActuationEvent. (*) (An effect is seen as...) Any kind (of an ImpactedProperty) of physical modification (ActuationEvent changes ActuationTarget) as the result of an actuation (ActuationResult involves ActuationEvent).
%GENERAL_EXAMPLE%@Actuation-Use-Case
actuation event
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant ActuationEvent classification.
bci:ActuationResult
Actuation,Results
Actuation.png
2018-05-08T18:53:00
[SSN], [Seydoux2016]
ActuationResult
Status: *STABLE*
[SSN] It represents the result of an Actuation, i.e. an entity representing the "effect" of the Actuation, which involves an ActuationEvent. [Seydoux2016] Following the Actuation-Actuator-Effect (AAE) design pattern proposed for the IoT Ontology (IoT-O), this concept expands the following relationship: [Actuation] ------ (involves) ------ [Effect]
If needed, BCI applications can time stamp an ActuationResult.
See general remark about: 2_MAPPINGS-TO-SAN
%GENERAL_EXAMPLE%@Actuation-Use-Case
actuation result
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant ActuationResult classification.
bci:ActuationTarget
Actuation,Context
Actuation.png
2018-03-23T04:35:00
[SSN], [Seydoux2016]
ActuationTarget
Status: *STABLE*
The following concepts encompass its modeling depiction: (*) [SSN] A sosa:FeatureOfInterest: the thing (ActuationTarget) whose property (ImpactedProperty) is being manipulated by an Actuator. (*) [SSN] Related from an sosa:Actuation via the property sosa:hasFeatureOfInterest: a relation between an Actuation and the entity (ActuationTarget) whose property (ImpactedProperty) was modified. (*) A Context.Object: a thing (object) in the interaction Context of the Session. [Seydoux2016] Following the Actuation-Actuator-Effect (AAE) design pattern proposed for the IoT Ontology (IoT-O), this concept captures the definition of FeatureOfInterest from the following relation: [ImpactedProperty] ------ (is property of) ------ [FeatureOfInterest]
%GENERAL_EXAMPLE%@Actuation-Use-Case
[SSN] A window is an ActuationTarget (sosa:FeatureOfInterest) for an automatic window control Actuator.
actuation target
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant ActuationTarget classification that Subjects interact with (changing its state) in Sessions.
bci:Actuator
Actuation
Actuation.png
2018-05-08T18:39:14
[SSN], [Seydoux2016]
Actuator
Status: *STABLE*
[SSN] A device that is used by, or implements, an (actuation) procedure that changes (triggers) the state of the world (ActuationTarget).
(*) "An actuator is a component of a machine that is responsible for moving or controlling a mechanism or system." (*) "An actuator requires a control signal and a source of energy: [ControlSignal]". (*) "An actuator is the mechanism by which a control system *acts* upon an environment." Reference: [Wikipedia: Actuator]
See general remark about: 2_MAPPINGS-TO-SAN
[Seydoux2016] Actuators are devices that transform an input signal into a physical output, making them the exact opposite of sensors. The SAN ontology is built around Actuation-Actuator-Effect (AAE), a design pattern inspired by the Stimulus-Sensor-Observation (SSO) pattern. Based on the focus of this ontology, and because actuators are the exact opposite of sensors, the concept of a channel for Actuators is not defined. If needed, BCI applications can model the system capabilities of an actuator by directly following SOSA's structure and concepts.
%GENERAL_EXAMPLE%@Actuation-Use-Case
actuator
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant Actuator classification that Subjects can use in Sessions.
bci:ActuatorSpec
Descriptor,Actuation
Actuation.png
Descriptor_(SOSA-SSN).png
2017-12-11T03:41:00
ActuatorSpec
Status: *STABLE*
An ActuatorSpec is an information object that describes specific properties (such as hardware specs, power used, types of interfaces, etc.) of an Actuator. Similarly to DeviceSpec, the structure of ActuatorSpec has been modeled as a composite object so that it can be composed as a set of ActuatorSpecs to describe specific parts of an Actuator. In this way, an ActuatorSpec is considered a bag of descriptive properties about the Actuator. An ActuatorSpec is a specialized Descriptor. An ActuatorSpec can be used to record any descriptive information related to the physical actuator component.
This ontology does not define any particular information object of an ActuatorSpec.
actuator specification
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe specific types of ActuatorSpec.
bci:Aspect
Observations
Aspect-and-Modality_(SOSA-SSN).png
2017-08-20T19:50:00
[SSN], [Compton2009]
Aspect
Status: *STABLE*
[SSN] It's the classification of a sosa:FeatureOfInterest in the course of a sosa:Observation for BCI Activities. This concept captures the view or interpretation of the Records and, thus, defines the purpose and/or scope of the observations of BCI Activities.
See general remark about: ASPECT-and-MODALITY
The following descriptions capture the definition of this concept ([oldSSN: FeatureOfInterest] and [Compton2009]) adjusted to this ontology: (*) An Aspect is an abstraction of BCI activities performed by humans, from the human body's state perspective. (*) Devices observe physiological signals (Modality-ies) of Aspects: for example, the EEG signals (Modality) of an emotion (Aspect). (*) Aspects are human body's states that are the target of sensing.
aspect
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the Aspects of the Records. Three main Aspects are defined in this ontology.
bci:Channel
SystemCapabilities
MeasurementCapability_(SOSA-SSN).png
2017-08-30T19:51:00
[SSN], [XDF], [ESS]
Channel
Status: *STABLE*
A Channel is a relevant metadata set that defines a logical component schema of a DeviceChannelingSpec's data structure model. A Channel is defined as a specialized ssn-system:SystemCapability type that describes a compounded set of measurement properties (ssn-system:SystemProperty-ies), as explained in the [SSN] System Capabilities Module and the [oldSSN] MeasuringCapability Module. A Channel is associated with a DeviceChannelingSpec definition and, hence, with a RecordChannelingSpec. As part of a channeling spec, a Channel definition can be extended to incorporate contextual metadata semantics (i.e., properties to describe dimensional characteristics regarding what, when, how --including mathematical formulas for calculations--, where, why, etc.), depending on the DataFormat used for the data files. This information can be associated with a channel definition via a Descriptor set.
A simplistic notion regarding the relationship and difference between the concepts of Channel and DataFormat, is depicted in the following example: If the used DataFormat were "CSV", then the channeling schema (DeviceChannelingSpec) would define the data's logical structure: [Channeling Schema] = { Col1: ID, Col2: Date, Col3: Name, ... }, where each column represents a specific Channel definition. Note that each column has its own related metadata and attributes; also, it follows its proper structure, format or notation scheme, etc.
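Continuing the CSV illustration (plain Python; the schema fields and values are hypothetical), a channeling schema can be viewed as an ordered list of Channel descriptors that gives each raw column its meaning:

```python
import csv, io

# Hypothetical channeling schema: one Channel descriptor per CSV column.
schema = [
    {"label": "ID",   "type": "identifier"},
    {"label": "Date", "type": "timestamp"},
    {"label": "Name", "type": "text"},
]

raw = "7,2016-07-05,subject-A\n"
row = next(csv.reader(io.StringIO(raw)))

# Pair each raw value with its Channel definition.
decoded = {ch["label"]: value for ch, value in zip(schema, row)}
print(decoded)  # {'ID': '7', 'Date': '2016-07-05', 'Name': 'subject-A'}
```

Each descriptor could carry the further per-channel metadata mentioned above (units, placement, format or notation scheme, etc.).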
BCI systems naturally collect and transmit data from a sender (transmitter machine) to a receiver (receiving machine). In a general and abstract way, a channel is used to convey an information signal, for example a digital bit stream, from one or several senders (or transmitters) to one or several receivers. The concept of channel defined in this ontology aims to capture a relevant metadata set that describes the measurement properties for any type of channel.
Channel Structure: The channel structure is composed of different related metadata and attributes. It varies widely depending on the following: (*) the related/associated Modality; (*) the functional "role" that it plays in a Device's data model; and (*) the way it is used in a specific Record's settings. Because a channeling spec is directly associated with a Device and a Record, in theory, a channel defines a specific (or proper) logical data structure component for a Device and/or a Record. The most common metadata and attributes of a channel structure, regardless of its nature, are: (*) Label: defined as hasLabel. (*) Type: defined as the channel class type (class hierarchy). (*) Placement (or Location): refers to the attribute set that defines its placement on a Subject. The placement's attribute structure varies widely depending on the channel's nature.
Channels are the logical components of Records. Their structural relationship resembles a matrix: each Channel is a different data row, while the data samples form the columns, each column corresponding to a specific time instant. Hence, Channels can point to specific parts of a Record.
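The row/column structure described above can be sketched with a nested list (plain Python; the numbers are made up): each inner list is one Channel's time series, and a column index selects the samples of all Channels at one time instant:

```python
# record[channel][t]: Channels are rows, time-indexed samples are columns.
record = [
    [0.1, 0.2, 0.3, 0.4],   # Channel 0 (e.g., one EEG channel)
    [1.0, 1.1, 1.2, 1.3],   # Channel 1
]

channel1 = record[1]                   # one Channel's full time series
t2_samples = [ch[2] for ch in record]  # all Channels at time index 2
print(channel1, t2_samples)  # [1.0, 1.1, 1.2, 1.3] [0.3, 1.2]
```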
See general remark about: UNITS-OF-MEASUREMENT
(*) [XDF] Eye-Gaze Channel: channeling metadata for an Eye-Gaze Record. (*) CoordinateSystem = { World-Space, Object-Space, Camera-Space, or Image-Space }: coordinate system of the respective parameter. (*) RefersTo = { Left, Right, or Both }: which eye the channel is referring to. (*) Type: Type of data in this channel. It can be any of the following values: (*) { ScreenX, ScreenY }: screen coordinates of the gaze cursor (can also refer to a scene image); usually in pixels. (*) { DirectionX, DirectionY, DirectionZ }: 3D gaze vector in some coordinate system. (*) { PositionX, PositionY, PositionZ }: 3D position of the eye center in some coordinate system. (*) { IntersectionX, IntersectionY, IntersectionZ }: 2D or 3D position of the intersection point with a plane (in some coordinate system). (*) { HeadX, HeadY, HeadZ }: 3D location of the head center in some coordinate system. (*) { PupilX, PupilY, PupilZ }: 2D or 3D location of the pupil center in some coordinate system. (*) { ReflexX, ReflexY, ReflexZ }: 2D or 3D location of the illuminator's reflection point in some coordinate system. (*) { Radius or Diameter }: the overall pupil radius or diameter (usually in mm or pixels). (*) { RadiusX, RadiusY }: horizontal and vertical pupil radius. (*) { DiameterX, DiameterY }: horizontal and vertical pupil diameter. (*) { Confidence }: for confidence information (preferred unit: normalized). (*) { FrameNumber }: frame number that the parameters were calculated from. (*) { PlaneNumber or ObjectId }: number or identifier of the object that was intersected by the gaze vector. (*) Keyboard-Hit Channel: channeling metadata for a Keyboard-Hit Record. (*) [XDF] Hand-Gesture Channel: channeling metadata for a Hand-Gesture Record. (*) Type = { Confidence, OrientationH, OrientationP, OrientationR, PositionX, PositionY, PositionZ }: type of data. [following types from GazeMetaData "LeapMotion_xml_output" definitions.] 
(*) Mouse-Click Channel: channeling metadata for a Mouse-Click Record. (*) Type = { PositionX, PositionY }. (*) Button = { Left, Other, Right, Wheel }.
channeling data (logical component)
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe specialized channel definitions related to specific Modality types.
bci:ChannelingSpec
Descriptor,Observations
Aspect-and-Modality_(SOSA-SSN).png
Descriptor_(SOSA-SSN).png
2016-08-08T23:54:00
ChannelingSpec
Status: *STABLE*
Each Modality defines its own specific channeling schema information: a complete, generic and descriptive set of all possible Channels and their extended metadata attributes that defines the data structure model and template of the Modality. A ChannelingSpec captures the complete description of the channeling schema information, in the form of an external document specification (outside the metadata repository). Similar to the DeviceSpec concept, a ChannelingSpec is a specialized Descriptor.
The channeling schema information related to all kinds of EegRecords would define around 32 fields (i.e., EegChannels) to describe a complete data structure for EEG data. The full specification for this channeling schema information would be associated with the generic EegModality concept. A proper name for this spec would be EegChannelingSpec.
channeling schema spec
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of a ChannelingSpec to capture the external information that defines the complete channeling schema information of a Modality.
bci:CognitiveAspect
Observations
Aspect-and-Modality_(SOSA-SSN).png
2016-05-24T00:53:00
CognitiveAspect
Status: *STABLE*
Describes the classification of CognitiveAspects. One application for this Aspect is Learning.
cognitive aspect
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the CognitiveAspects of the Records.
bci:Collection
Session
Session.png
2016-06-28T04:57:00
[ESS]
Collection
Status: *STABLE*
Groups a collection of related Sessions and/or Interactions, which may be associated with different Activity(ies).
(*) Collection generalizes the concept of Study as defined in [ESS]: a set of data collection efforts to answer one or few related scientific questions. (*) This concept defines a longitudinal (temporal) collection of Sessions.
collection
bci:Command
Actuation
Actuation.png
2018-05-11T18:45:00
[Seydoux2016]
Command
Status: *STABLE*
Represents a specific order (based on a Record) to an Actuator to perform an Actuation. Typically, it depicts an instruction (or signal) that causes an Actuator to perform (executes) one of its basic functions, thus triggering an Actuation. A Command has the following intrinsic characteristics: (*) Defines the input for a set of Actuators. (*) Its source is a set of Records. [Seydoux2016] From the Actuation-Actuator-Effect (AAE) design pattern proposed for the IoT Ontology (IoT-O), this concept is based on the following definitions:
[Actuator] ------ (consumes) ------ [Input]
[Actuator] ------ (executes) ------ [Command]
See general remark about: 2_MAPPINGS-TO-SAN
%GENERAL_EXAMPLE%@Actuation-Use-Case
command
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe specific types of Commands.
bci:Context
Context
Context.png
2016-04-17T21:45:00
[Shafer2001], [Unity]
Context
Status: *STABLE*
In [Shafer2001] we find the following definition of Context: Dey et al. define context as "any information that can be used to characterize the situation of entities (i.e., whether a person, place, or object) that are considered relevant to the interaction between a user and an application" (p. 106). Thus, context awareness implies two attributes of a system: the ability to obtain context and the ability to utilize contextual information. For the purpose of this ontology, a Context is the architectural description of the environment (external settings, components, and procedures) with which a Subject interacts during a Session.
A classification (class hierarchy) for Context has not yet been defined.
(*) Physical descriptions of any environment. (*) Simulations. (*) Video Games. (*) Virtual Reality environments.
context
bci:Context.AutonomousBeing
Context,Subject
Context.png
2018-04-12T23:47:00
Context.AutonomousBeing
Status: *STABLE*
Any self-contained and self-governed Context.Object able to react to Context.Events (stimuli) and act on its own, based on its specific Context.Capability-ies. This concept encompasses any living organism, such as humans and animals.
From the Context perspective, the architectural description of Context.AutonomousBeings, along with their Context.Capability-ies, gives a framework to the following modeling premises: (*) A Subject is a special kind of Context.AutonomousBeing. (*) From a Subject perspective, the notion of "act on its own" implies that a Subject can (Context.Capability) issue multiple Actions. (*) A Subject interacts with a Context through her Actions.
(*) Human beings (*) Animals (*) "Intelligent" machines/devices/artifacts (with well-defined Context.Capability-ies)
context autonomous being
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any Context.AutonomousBeing entities.
bci:Context.Capability
Context,Subject
Context.png
2018-04-15T23:47:00
Context.Capability
Status: *STABLE*
A capability is the ability of a Context.AutonomousBeing to perform (canPerform) or achieve certain actions (see: Action) or outcomes.
The architectural description of Context.Capability gives a framework to the following modeling premises: (*) A Context.Capability may raise (raises) Context.Events. (*) Through her Context.Capability-ies, a Subject can issue (issues) Actions.
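A minimal Turtle sketch of these premises (the namespace IRI and individual names are placeholders; canPerform, raises, and issues are the property names quoted in this spec):

```turtle
@prefix bci: <https://example.org/bci#> .   # placeholder namespace IRI

# A Subject performs through her Context.Capability-ies...
bci:subject-001 a bci:Subject ;
    bci:canPerform bci:capability-pointing ;
    bci:issues bci:action-select .

# ...and a Context.Capability may raise Context.Events.
bci:capability-pointing a bci:Context.Capability ;
    bci:raises bci:event-pointerClick .

bci:event-pointerClick a bci:Context.Event .
```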
capability
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any Context.Capability entities.
1
1
bci:Context.Event
Context
Activity.png
Context.png
2018-04-19T00:47:00
[Unity]
Context.Event
Status: *STABLE*
Captures a change of state on a set of related Context.Objects through the effectuation of their behavior (Context.Methods) in a time frame. A Context.Event models any kind of "happening" or "occurrence" among a set of Context.Objects in the timeline of a Context.Scene (temporality).
Regarding the alignment to DUL:Event, below are two groups of important notes and modeling considerations about its ontological implications: (*) Regarding its "ontological definition": (*) It defines a relationship to a time interval. Hence, it has an intrinsically temporal nature. (*) An event does not represent a capability of any object but a temporal object participation. (*) Architecturally speaking, it is natural to model sequences of related Context.Events: DUL:precedes/DUL:follows relations. (*) This ontology followed a modeling approach of participant-based classification of events. (*) Regarding "Causality": DUL:Event's definition refers to the "causality" nature of an event in two alternative sets of views: (*) Aspectual views: "as a transition (something that has changed from a state to a different one)". (*) Intentionality views: "the causal analysis generates situations with different identities, according to what DUL:Description is taken for interpreting the DUL:Event". (...) "If intentionality is a criterion to classify events or not, this depends on if an ontology designer wants to consider causality as a relevant dimension for events' identity". The architectural description of events in a Context in this ontology fits the "Aspectual" view of the nature of causality: as a transition between states based on the Context.Methods of the Context.Objects.
Some event types from the Gaming domain are: (*) PointerEnter (*) PointerExit (*) PointerDown (*) PointerUp (*) PointerClick (*) UpdateSelected (*) InitializePotentialDrag (*) BeginDrag (*) EndDrag
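The temporal sequencing of related Context.Events via DUL:precedes/DUL:follows can be sketched as follows (placeholder namespace and individual names, drawn from the Gaming event types listed above):

```turtle
@prefix bci: <https://example.org/bci#> .   # placeholder namespace IRI
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .

# A PointerDown -> PointerUp -> PointerClick event sequence.
bci:event-pointerDown a bci:Context.Event ;
    dul:precedes bci:event-pointerUp .

bci:event-pointerUp a bci:Context.Event ;
    dul:follows  bci:event-pointerDown ;
    dul:precedes bci:event-pointerClick .

bci:event-pointerClick a bci:Context.Event ;
    dul:follows bci:event-pointerUp .
```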
context event
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any Context.Event entities. This class hierarchy includes the subclasses StimulusEvent, ActuationEvent, and Action.
1
0
bci:Context.Method
Context
Context.png
2018-04-15T23:47:00
[Unity]
Context.Method
Status: *STABLE*
A function, workflow, protocol, plan, algorithm, or computational method specifying how (as a set of formal rules) to perform an operation (usually represented as a verb-phrase) associated with a set of Context.Objects. It defines a perspective of their expected behavior, may change the state of related Context.Objects, and may raise Context.Events. A Context.Method is reusable and might be involved in many Context.Events. A Context.Method defines the steps to be carried out as part of an expected behavior associated with a set of Context.Objects.
The architectural description of methods in a Context in this ontology is based on the following modeling premises: (*) Object-Oriented Programming: This concept resembles the notion of method (of a class). (*) Unity domain: This concept is equivalent to the Script component. (*) BCI domain: This concept captures the primitives for a "Protocol" or "Procedure" in classical BCI experiments, that is, the behavior (methods) of the target Context.Objects that the Subject needs to pay attention to during a Session while performing an Activity.
From the Unity Gaming Modeling Architecture, the following components could be defined as methods: (*) Physics: A Context.Method that can define specific behavior based on Physics models. (*) Transform: A Context.Method that defines the logic of how a Context.Object can move. (*) Protocol or Procedure: some BCI applications may need to define a set of Context.Objects that the Subject needs to pay attention to. A specific type of Transform could be defined to represent the logical behavior or movement of a Context.Object based on an algorithm.
Some method types from the Gaming domain are: (*) Drag (*) Drop (*) Scroll (*) Select (*) Deselect (*) Move (*) Jump (*) Run (*) Roll-Over (*) Hit (*) Throw (*) Beep (*) On (*) Off (*) Open (*) Close (*) Submit (*) Cancel
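A minimal Turtle sketch of a Context.Method (placeholder namespace and individual names; raises is quoted in this spec, while the object-to-method association property hasMethod is an assumption for illustration):

```turtle
@prefix bci: <https://example.org/bci#> .   # placeholder namespace IRI

# A Transform-like method defining how a Context.Object can move;
# effectuating it may raise a Context.Event.
bci:method-move a bci:Context.Method ;
    bci:raises bci:event-objectMoved .

bci:object-ball a bci:Context.Object ;
    bci:hasMethod bci:method-move .   # "hasMethod" is a hypothetical property name

bci:event-objectMoved a bci:Context.Event .
```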
context method
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any Context.Method entities. This class hierarchy includes the subclass Context.Capability.
0
1
0
1
1
1
bci:Context.Object
Context
Context.png
2018-04-15T23:10:00
[Unity]
Context.Object
Status: *STABLE*
Captures the architectural description of a DUL:Object (any spatially located DUL:Entity with occurrences --temporality-- in some DUL:Events) that participates interactively in a Context.Scene with a specific Context.Role. A Context.Object has the following characteristics: (*) Spatial Location: Its scope is bound to a Context.Role for a specific Context.Scene. (*) Structural Composition: It is composed of a collection of Context.Objects. Hence, the structure of a Context.Object is a set of composite Context.Objects. (*) Functional Behavior: It is defined through a set of Context.Methods that capture the range of suitable operations that can be performed to serve its Context.Role. This is the reason behind the interaction among Context.Objects. (*) Timeline (Period of Existence): It is bound to its participation in Context.Events for a specific Context.Scene.
(*) This concept resembles the notion of object in the Object-Oriented Programming paradigm: Context.Objects interact with one another (raises Context.Events) through their Context.Methods. (*) A special characterization of Context.Objects through the concept Context.AutonomousBeing sets a proper architectural framework to model Subjects and Actions from the Context perspective.
Taken directly from the Unity Gaming Modeling Architecture, below are listed some components that may be defined as Context.Object types: (*) Audio: object with audio capabilities. (*) Camera: object with specific visual capabilities for perspectives. (*) Effects: object that can define specific visual effects. (*) Layout: object that can define specific layout configurations. (*) Video: object with video recording capabilities. From its Modeling Architecture, the description of "capabilities" in Unity gives a solid reference point to the notions of Context.Method and Context.Capability.
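The four characteristics above can be sketched for a single Context.Object in Turtle (placeholder namespace and individual names; the property names hasRole, isComposedOf, and hasMethod are assumptions for illustration):

```turtle
@prefix bci: <https://example.org/bci#> .   # placeholder namespace IRI

bci:object-car a bci:Context.Object ;
    bci:hasRole      bci:role-character ;   # scope bound to a Context.Role
    bci:isComposedOf bci:object-wheel ;     # structural composition
    bci:hasMethod    bci:method-move .      # functional behavior (Context.Methods)

bci:object-wheel a bci:Context.Object .    # a composite Context.Object
bci:role-character a bci:Context.Role .
bci:method-move a bci:Context.Method .
```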
context object
Describing contextual interactive objects in any Context.Scene.
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the contextual interactive entities that participate in any Context.Scene. This class hierarchy includes the subclasses Context.AutonomousBeing and ActuationTarget.
0
1
bci:Context.ObjectComponent
Context
Context.ObjectComponent.png
2018-03-14T03:35:00
[Unity]
Context.ObjectComponent
Editorial note (2018-03-14): Following closely the alignment to DUL, the concepts about Objects and Events are distinctly separated. Therefore, from a structural perspective, a Context.ObjectComponent is a Context.Object, except that Context.ObjectComponent.Event is changed to Context.Event.
true
Status: *STABLE*
Captures the architectural description of a stand-alone entity ("logical" or "physical") that structurally forms part of a Context.Object. A Context.ObjectComponent can be composed of Context.ObjectComponents.
(*) Audio: A Context.ObjectComponent with audio capabilities. (*) Camera: A Context.ObjectComponent that defines a specific visual perspective for the Subject. (*) Effects: A Context.ObjectComponent that can define specific visual effects. (*) Layout: A Context.ObjectComponent that can define specific layout configurations. (*) Physics: A Context.ObjectComponent that can define specific behavior based on Physics models. (*) Transform: A Context.ObjectComponent that defines the logic of how an entity can move. (*) Protocol or Procedure: some BCI applications may need to define a set of Context.Objects that the Subject needs to pay attention to. A specific type of Transform could be defined to represent the logical behavior or movement of an entity based on an algorithm. (*) Video: A Context.ObjectComponent with video recording capabilities.
context object component
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the entities that can form any Context.Object.
bci:Context.Role
Context
Context.Role.png
2018-04-12T04:50:00
[Unity]
Context.Role
Status: *STABLE*
Specifies the purpose, mission, or functional classification (a DUL:Role) of a Context.Object in a Context.Scene, defining the notions of its expected nature from the perspectives of: (*) Structure: What is it? (*) Behavior: How does it interact? A Context.Object has exactly one Context.Role.
This ontology does not define any specific Context.Role subclasses.
From the Unity Gaming Modeling Architecture, the following are common roles found in any scene: (*) Character: The Context.Object participates as a character in the Context.Scene. That is, as an autonomous animated object (Context.AutonomousBeing) that interacts directly (in the foreground) with the Subject. For some BCI applications, this role describes all the Context.Objects that form a "Protocol" or "Procedure"; that is, the ones that the Subject needs to pay attention to (the "target" objects). (*) Property: The Context.Object participates as a property in the Context.Scene. That is, as a co-dependent object that can influence the configuration of any object in a Context.Scene. (*) Scenery: The Context.Object participates as part of the scenery in the Context.Scene. That is, as an (autonomous animated) object (may be a Context.AutonomousBeing) that interacts indirectly (in the background) with the Subject. Some BCI applications implement simple Contexts, based solely on two structural roles for Context.Objects: (*) Background. (*) Foreground: where the "Protocol" or "Procedure" is implemented. Hence, the classification described above can be mapped onto this terminology in the following way: (*) Background is equivalent to the scenery role. (*) Foreground is equivalent to the character role.
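A domain-specific application extending Context.Role as described above (this ontology itself defines no Context.Role subclasses) might declare, in Turtle (placeholder namespace and individual names; hasRole is an assumed property name):

```turtle
@prefix bci:  <https://example.org/bci#> .   # placeholder namespace IRI
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Application-level role classification (Foreground = Character, Background = Scenery).
bci:CharacterRole rdfs:subClassOf bci:Context.Role .
bci:PropertyRole  rdfs:subClassOf bci:Context.Role .
bci:SceneryRole   rdfs:subClassOf bci:Context.Role .

# A "target" object the Subject needs to pay attention to (foreground).
bci:object-targetStimulus a bci:Context.Object ;
    bci:hasRole bci:role-01 .

bci:role-01 a bci:CharacterRole .
```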
context object role
The classification of how a Context.Object participates in a Context.Scene.
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the different ways ("Roles") in that Context.Objects can participate in a Context.Scene.
0
1
0
0
1
1
bci:Context.Scene
Context
Context.Scene.png
2018-04-16T22:24:00
[Unity]
Context.Scene
Status: *STABLE*
An ordered temporal part of a Context that captures possible relevant contextual interactions, i.e., a collection of Context.Objects interplaying with one another (sequence of Context.Events) in a specific way. A Context is composed of a non-empty sequence of Context.Scenes, based on the time dimension (temporality): a Context.Scene is related to other Context.Scenes based on its temporality (occurrence in its sequence). A Context.Scene can be composed of multiple Context.Scenes. The architectural description of a Context.Scene entity is depicted in the following way: (*) Structural: its associated Context.Objects (and their compositions). (*) Functional: the associated Context.Methods (and related Context.Events) via the Context.Objects that comprise its structure. (*) Temporal: the collection of all sequences of included Context.Events. A Context.Scene corresponds to the notion of World or Level on a Gaming platform.
(*) "In mathematics, a sequence is an ordered collection of objects in which repetitions are allowed. (...) Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence". Reference: [Wikipedia: Sequence]
(*) On a Simulation: [driving a car in a not-busy day on a freeway]; [driving a car in rush hour on a main city road]. (*) On a Video Game: World 3; Level 3-2.
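The Simulation example above can be sketched as an ordered sequence of Context.Scenes (placeholder namespace and individual names; isComposedOf is an assumed property name, and the ordering reuses DUL:precedes as elsewhere in this spec):

```turtle
@prefix bci: <https://example.org/bci#> .   # placeholder namespace IRI
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .

# A driving-simulation Context composed of a non-empty sequence of Context.Scenes.
bci:context-drivingSim a bci:Context ;
    bci:isComposedOf bci:scene-freeway , bci:scene-rushHour .

bci:scene-freeway a bci:Context.Scene ;     # driving on a not-busy freeway
    dul:precedes bci:scene-rushHour .       # temporality: occurrence in the sequence

bci:scene-rushHour a bci:Context.Scene .    # driving in rush hour on a main city road
```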
context scene
1
1
1
1
1
bci:DataBlock
Results
DataBlock_(SOSA-SSN).png
2018-01-17T05:25:00
[oldSSN], [SSN]
DataBlock
Status: *STABLE*
A DataBlock represents the basic/atomic physical data unit value of a RecordedData. Hence, following the previous definition in [oldSSN], a RecordedData has as value a non-empty sequence (without repetitions) of DataBlocks. In the BCI domain: (*) A RecordedData is composed of a non-empty sequence (without repetitions) of DataBlocks. (*) A DataBlock is considered a physical data entity (whereas a DataSegment is considered a logical data entity). (*) A DataBlock is related to another DataBlock based on its temporality: occurrence in its sequence. Therefore, a DataBlock may be sequentially linked to a following and a previous DataBlock. (*) All the sequentially linked DataBlocks compose the value of a RecordedData. (*) The mechanism to access a DataBlock is the following: (*) First, one retrieves the locators of the AccessMethods associated with the desired RecordedData. With this, one gains access to the "data file". (*) Then, one can derive the corresponding DataBlock locators, using the positional attributes: ordinal position, offset, or timestamp.
"In mathematics, a sequence is an ordered collection of objects in which repetitions are allowed. (...) Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence". Reference: [Wikipedia: Sequence] For this specific concept, a sequence of DataBlocks does not allow repetitions.
Depending on its implementation nature, a BCI application may choose to use any (or both) of the positional attributes: (*) hasTimeStamp. (*) hasOffset.
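A minimal Turtle sketch of a RecordedData value as a sequence of DataBlocks (placeholder namespace, individual names, and literal values; hasTimeStamp and hasOffset are the positional attributes named above, while hasValue and hasNext are assumed property names):

```turtle
@prefix bci: <https://example.org/bci#> .   # placeholder namespace IRI
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

bci:recordedData-01 a bci:RecordedData ;
    bci:hasValue bci:dataBlock-0 .          # first DataBlock of the sequence

bci:dataBlock-0 a bci:DataBlock ;
    bci:hasOffset    "0"^^xsd:integer ;
    bci:hasTimeStamp "2018-04-12T23:47:00"^^xsd:dateTime ;
    bci:hasNext      bci:dataBlock-1 .      # sequential link to the following DataBlock

bci:dataBlock-1 a bci:DataBlock ;
    bci:hasOffset "512"^^xsd:integer .      # no repetitions: each block occurs once
```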
See general remark about: 1_MAPPINGS-TO-SOSA-SSN
BCI data block
0
bci:DataFormat
Observations
RecordedData_(SOSA-SSN).png
2016-08-17T23:58:00
DataFormat
Status: *STABLE*
This concept describes any Data Format that BCI applications use to represent and store the RecordedData. A DataFormat represents any specific standard data format used in one of the following ways: (*) Signal (electrical engineering). (*) File format. (*) Content format.
A DataFormat should properly define its encoding scheme.
BCI data format
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the data formats that BCI applications can use to represent and store the RecordedData. This ontology defines some common data formats used in the BCI domain.
1
0
0
bci:DataSegment
AnnotationTag
DataSegment.png
2016-06-12T22:05:00
DataSegment
Status: *STABLE*
A DataSegment is a sequence of sequentially linked DataBlocks and, thus, identifies a proper subset of a RecordedData. A time interval is implicitly defined between the first DataBlock (startTime) and the last DataBlock (endTime) of the DataSegment. In the BCI domain: (*) A DataSegment constitutes the basic data unit for "tagging" purposes, i.e., to associate semantic annotations (event tags or Markers) with the data (DataBlock sets). (*) A DataSegment is a collection of DataBlocks that spans a certain time interval. (*) A DataSegment is considered a logical data entity (whereas a DataBlock is considered a physical data entity).
Right after a set of DataSegments is created, two consecutive tasks occur: (*) A set of editing processes is run on the recordings, which classify the data sets based on different Models. (*) A set of ResponseTags is created, which capture the information related to "what is so special about" a particular DataSegment.
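A minimal Turtle sketch of a DataSegment and its annotation (placeholder namespace and individual names; the property names hasFirstDataBlock, hasLastDataBlock, and annotates are assumptions for illustration):

```turtle
@prefix bci: <https://example.org/bci#> .   # placeholder namespace IRI

# A logical data entity spanning the interval between its first and last DataBlocks.
bci:dataSegment-01 a bci:DataSegment ;
    bci:hasFirstDataBlock bci:dataBlock-0 ;   # implicit startTime
    bci:hasLastDataBlock  bci:dataBlock-7 .   # implicit endTime

# A ResponseTag capturing "what is so special about" this DataSegment.
bci:responseTag-eyeBlink a bci:ResponseTag ;
    bci:annotates bci:dataSegment-01 .
```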
data segment
0
1
1
1
bci:Descriptor
Descriptor
Descriptor_(SOSA-SSN).png
2016-06-30T01:30:00
[ESS], [XDF]
Descriptor
Status: *STABLE*
Describes an external Web resource that complements the information related to a specific entity found in this ontology. In a general sense, it represents a class of information objects that describe metadata. Each individual of this class refers to an external file/document with extensive information about the associated metadata object. A Descriptor can have a related Descriptor set. This concept is defined as a subclass of DUL:InformationObject.
[XDF] and [ESS] BCI applications may extend this concept based on its purpose as an information object (practical usage of the Web resource). Thus, some subclasses of this concept could be: (*) Annotation. (*) Channel locations. (*) Descriptive metadata. (*) Event instance. (*) Experiment note. (*) Specification.
descriptor of an external resource
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of Descriptors to capture specific external information that complements a metadata object.
1
0
1
1
bci:Device
Sensors
Device_(SOSA-SSN).png
=== ** scopeNote ** This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of BCI devices to measure specific Modality(ies) for BCI activities, such as: (*) EEG (Electroencephalogram) device. (*) ECG (Electrocardiogram) device. (*) MoCap (Motion Capture) device. Example: LeapMotion (MoCap) Tracker. (*) Eye-Gaze device (for gaze or eye-tracking). Example: EyeTribe. (*) Audio device. (*) Video device. (*) Hand-Gesture device. (*) Keyboard device. (*) Mouse device. (*) Visual BCI device. This ontology does not define all the BCI devices listed above. ===
2017-08-20T22:40:00
[XDF], [SSN], [Compton2009]
Device
Status: *STABLE*
[SSN] A Device is a physical piece of technology (a system in a box) that implements a sensing method (similar to the concept of oldssn:SensingDevice) and, thus, observes some Modality (a sosa:ObservableProperty) of an Aspect (a sosa:FeatureOfInterest). In the BCI domain, a Device is a physical BCI device (or sensor) used to measure BCI activities. A Device collects data (represented by RecordedData) in a sosa:Observation (a Record).
See general remark about: 1_MAPPINGS-TO-SOSA-SSN
[Compton2009] According to the Sensor Ontology, a sensor has a set of independent clusters of concepts: (*) Domain: FeatureOfInterest and PhysicalQuality. (*) Abstract properties: OperationModel that defines a ResponseModel. (*) Concrete properties: SensorGrounding. A Device in this ontology corresponds to the concepts for Abstract properties (#2).
[oldSSN] Based on the guidelines explained in the following examples: (*) (5.3.12 Device), (*) (5.4.1 University deployment example -- 5.4.1.3 Sensor), (*) (5.4.2 Smart product example -- 5.4.2.2 Sensor), (*) (5.4.3 Wind sensor (WM30) -- 5.4.3.2 Wind Sensor system), and (*) (5.4.4 Agriculture Meteorology Sensor Network -- 5.4.4.1.3 Sensor view), some of the core restrictions modeled initially from oldssn:SensingDevice are: (*) For ssn-system:SystemCapability, two distinct kinds of measurement capabilities are identified and defined: (*) Those used for defining the Channeling Spec: a set of Channels. These are associated indirectly via the DeviceChannelingSpec concept. (*) Other measurement capabilities not related to any channel definition: NonChannels. These are associated directly via a ssn:hasMeasurementCapability sub-property, as follows: ssn:hasMeasurementCapability (hasNonChannelData) only ssn-system:SystemCapability (NonChannel): multiple instances. (*) ssn:observes (observes) only sosa:ObservableProperty (Modality): for BCI, it is implied that it only has one instance. (*) ssn:detects (detects) only ssn:Stimulus (StimulusEvent): multiple instances; details (what made) the sosa:Sensor input.
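The restrictions above can be sketched for a concrete device instance (placeholder namespace and individual names; the ssn:/sosa: terms are written as quoted in this spec, and the ssn: namespace binding is an assumption):

```turtle
@prefix bci:  <https://example.org/bci#> .   # placeholder namespace IRI
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ssn:  <http://www.w3.org/ns/ssn/> .

bci:device-eegCap a bci:Device , sosa:Sensor ;
    ssn:observes bci:modality-eeg ;              # only one Modality per BCI Device
    ssn:detects  bci:stimulusEvent-flash ;       # what made the sensor input
    ssn:hasMeasurementCapability bci:nonChannel-batteryLevel .

bci:modality-eeg a bci:EegModality .
bci:stimulusEvent-flash a bci:StimulusEvent .
bci:nonChannel-batteryLevel a bci:NonChannel .
```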
BCI device
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of BCI devices to measure specific Modality(ies) for BCI activities.
1
1
bci:DeviceChannelingSpec
Descriptor,Sensors
Aspect-and-Modality_(SOSA-SSN).png
Descriptor_(SOSA-SSN).png
Device_(SOSA-SSN).png
2016-08-08T23:13:00
DeviceChannelingSpec
Status: *STABLE*
Each Device supports specific channeling schema information: all the supported logical components (Channels) and their extended metadata, which describe a "more concrete" subset of its Modality's data structure model and template, based on the Device's own physical spec of its operational features and functionalities. A DeviceChannelingSpec captures two information sets for a specific Device: (*) Its complete channeling schema description, in the form of an external document specification (outside the metadata repository): a specialized Descriptor. (*) Relevant metadata attributes regarding the specific characteristics of the Device's channeling schema: a set of related Channels. The structure described in a DeviceChannelingSpec (the first information set mentioned above) is a functional subset of the ChannelingSpec defined for the Modality that the Device supports, following the [SSN] data model. Hence, for practical reasons, a DeviceChannelingSpec is defined as a subclass of ChannelingSpec.
Theoretically, a DeviceChannelingSpec could be defined as a specialized DeviceSpec concept. However, for practical reasons, this ontology aligns the definition of its first information set with ChannelingSpec.
device channeling schema spec
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of a DeviceChannelingSpec to capture the external information that defines the channeling schema information of a Device.
0
bci:DeviceSpec
Descriptor,Sensors
Descriptor_(SOSA-SSN).png
Device_(SOSA-SSN).png
=== ** scopeNote ** (*) Manufacturer (source [XDF], [ESS]): manufacturer of the sensor (device). { History note } (*) [ESS 1.0] Corresponds to the (/study/summary/recordedModalities/modality/recordingDevice) node definition: name or type of recording device used to acquire data (manufacturer name). (*) [ESS 2.0] Corresponds to the (/study/recordingParameterSets/recordingParameterSet/channelType/modality/name) node definition: the name (brand) of the sensor device. For example: BioSemi, OptiTrack, SMI, etc. ===
2017-08-31T00:21:00
[SSN], [XDF], [ESS]
DeviceSpec
Status: *STABLE*
[SSN], [XDF] A DeviceSpec is an oldssn:SensorDataSheet (information object) that records (describes) specific properties (such as hardware specs, power used, types of connectors, etc.) of a Device. It has been modeled as a composite object so that a set of DeviceSpecs can be composed to describe specific parts of a Device. In this way, a DeviceSpec is considered a bag of descriptive properties about the Device. A DeviceSpec is a specialized Descriptor. The relevant set of a Device's properties is recorded directly (with hasChannelData and hasNonChannelData), but DeviceSpecs can be used to record any other descriptive information related to the physical device, such as: (*) the manufacturer's specifications versus observed capabilities, or (*) cases where more is known than the manufacturer specifies, etc.
(*) The channeling schema that a Device supports is defined as an independent component from the DeviceSpec. A Device's channeling schema (DeviceChannelingSpec) is a subset of the generic ChannelingSpec defined for its corresponding Modality. (*) This ontology does not define any particular information object of a DeviceSpec.
Some BCI applications based on [XDF] find it important to keep information regarding the hardware specifications of their Devices. Hence, a BCI application could define a classification for different types of specifications, such as: (*) Hardware specs: (*) Manufacturer (source [XDF], [ESS]): manufacturer of the sensor (device). (*) Material (source [XDF]): conductive material of the sensor (e.g. Ag-AgCl, Foam, Plastic, Rubber). (*) Model (source [XDF]): model of the sensor. (*) Serial number (source [XDF]): serial number of the device. Its generalization was taken from the description of the "Gaze meta data". (*) Ownership specs: (*) Name / Label: a logical human-readable name or label of the device. (*) Organization: organization name that owns the device.
device specification
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe specific types of DeviceSpec.
bci:EegChannel
EEG
MeasurementCapability_(SOSA-SSN).png
2018-05-20T23:25:00
[XDF], [ESS]
EegChannel
Status: *STABLE*
Defines a broader type of an EEG Channel (channeling scheme information component), used in BCI applications to collect EEG (Electroencephalography) data. This concept relates directly to the notion of an electrode capturing brainwave activity.
See general remark about: EEG-CONCEPTS
BCI applications based on [ESS] and [XDF] could define the following channeling metadata attributes for an EegModality spec: (*) Label: (it could be defined as part of the RecordChannelingSpec, if it is the same value for all the channels) (*) [ESS 2.0]: a comma-separated list of labels for the corresponding channels. This node is required for the EEG Modality. (*) [XDF]: EEG channel label, according to the labeling scheme. For EEG, the preferred labeling scheme is 10-20 (or the finer-grained 10-5). (*) Placement: (it could be defined as part of the RecordChannelingSpec, if it is the same value for all the channels) (*) [ESS 2.0]: location of the reference channel or channels used during EEG or ECG recording. Should only be provided if the ModalitySignalType (Modality) is EEG or ECG. For EEG, the preferred location convention is presented below. Choose between the following values (or provide a new value if the reference is not any of these options): {"Right Mastoid", "Left Mastoid", "Mastoids", "Linked Mastoids" [for electrically linked mastoids], "Cz" [top of the head], "CMS" [e.g. in BIOSEMI], "Left Ear", "Right Ear", "Ears", "Average", "Nasion", "Nose"}. For Wilson Central Terminal ECG reference use "WCT". (*) [XDF]: (*) { LocationX, LocationY, LocationZ }: 3D position (measured location) of the electrode on the head's surface based on a coordinate system (frame of reference). Each value is described as: (*) { LocationX }: coordinate axis pointing from the center of the head to the right, in millimeters. (*) { LocationY }: coordinate axis pointing from the center of the head to the front, in millimeters. (*) { LocationZ }: coordinate axis pointing from the center of the head to the top, in millimeters. XDF states that if the used coordinate system is arbitrary, the application should then include well-known fiducials (landmarks) for co-registration. (*) LocationType = { 10-10, 10-20, 10-5, Custom, EGI }: channel location type/standard used.
(*) [XDF] ChannelFormat = { double64, float32, int16, int32, int64, int8, string }: corresponds to the channel_format field of the StreamHeader chunk section. It's one of the 3 required fields in the XDF header. (*) [XDF] Signal Referencing Scheme: (*) isCommonAverage: (boolean data type); "true" if the subtracted reference signal was a common average, otherwise "false". (*) isSubtracted: (boolean data type); "true" if a reference signal has already been subtracted from the data, otherwise "false". BCI applications should include their own relevant dictionaries (placement, formats, etc.), as part of their proprietary extended semantic definitions.
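An EegChannel carrying [XDF]-style metadata might look as follows in Turtle (placeholder namespace, individual name, and coordinate values; the property names hasLabel, hasLocationType, and hasLocationX/Y/Z are assumptions mirroring the attributes listed above):

```turtle
@prefix bci: <https://example.org/bci#> .   # placeholder namespace IRI
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# A vertex electrode channel labeled by the 10-20 placement scheme.
bci:eegChannel-Cz a bci:EegChannel ;
    bci:hasLabel        "Cz" ;
    bci:hasLocationType "10-20" ;
    bci:hasLocationX "0.0"^^xsd:double ;    # center-to-right axis, millimeters
    bci:hasLocationY "0.0"^^xsd:double ;    # center-to-front axis, millimeters
    bci:hasLocationZ "95.0"^^xsd:double .   # center-to-top axis, millimeters
```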
EEG channel
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant EegChannels, based on specialized EegModality(ies), that BCI applications may require.
0
1
bci:EegDevice
EEG
Device_(SOSA-SSN).png
2016-08-05T03:51:00
EegDevice
Status: *STABLE*
Defines a broader type of an EEG Device, used in BCI applications to collect EEG (Electroencephalography) data.
See general remark about: EEG-CONCEPTS
EEG device
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant EegDevices, based on specialized EegModality(ies), that BCI applications may require.
bci:EegModality
EEG
Aspect-and-Modality_(SOSA-SSN).png
2018-06-06T23:30:00
EegModality
Status: *STABLE*
A specific type of Modality for EEG (Electroencephalography). This modality can be further classified depending on different measurement procedures, applications and set of stimuli.
See general remark about: EEG-CONCEPTS
EEG modality
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant EEG modalities. Following, we present a possible classification for this type: (*) ERP (Event Related Potential -voltage-): related to a stimuli. (*) Visually Evoked Potential (VEP): Video. (*) Glaucoma (mfVEP - Vision Field Sensitivity). (*) TVEP: Transient. (*) AEP: Aural (*) "Free Run".
bci:EegNonChannel
EEG
MeasurementCapability_(SOSA-SSN).png
2016-08-10T06:03:00
EegNonChannel
Status: *STABLE*
The NonChannel of a specific EegDevice.
See general remark about: EEG-CONCEPTS
non-channeling EEG data component (other EEG measurement capability)
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant NonChannels specific for an EegDevice.
1
bci:EegRecord
EEG
Record_(SOSA-SSN).png
2016-08-05T04:09:00
EegRecord
Status: *STABLE*
Defines a broader type of an EEG Record, which represents the class of observations for EEG (Electroencephalography) data.
See general remark about: EEG-CONCEPTS
EEG record
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant EegRecords, based on specialized EegModality(ies), that BCI applications may require.
bci:EmotionalAspect
Observations
Aspect-and-Modality_(SOSA-SSN).png
2016-05-24T00:56:00
EmotionalAspect
Status: *STABLE*
Describes the classification of EmotionalAspects.
emotional aspect
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the EmotionalAspects of the Records.
1
bci:FeatureParameter
AnnotationTag
FeatureParameter.png
2018-04-16T03:27:00
FeatureParameter
Status: *STABLE*
A FeatureParameter is an auxiliary data-analytics object related to an "epoch" (pointed to by a Marker) in the raw data (DataBlock) that describes an event of interest in the analysis (its semantics are captured by a ResponseTag). In practice, a set of FeatureParameters characterizes the content of a ResponseTag. A FeatureParameter is commonly formally defined by an underlying mathematical/statistical Model, which describes how it is calculated. FeatureParameters are: (*) Relevant computed data (used by or built in developed algorithms/Models that analyze the raw dataset (DataBlock)). (*) Pinpointed for each "epoch" of the raw dataset recording (DataBlock). An epoch is characterized by a ResponseTag marker; the temporal tagging of epochs (frequency-based, duration/interval, timestamped) depends on both the nature of the data and the analytical Models used over them. Thus, most of the features are considered transient information objects, i.e., they have an expiration date.
Some feature parameters for mfSSVEP (Steady-State Visually Evoked Potential with Vision Field Sensitivity) data sets for glaucoma patients are: (*) CCA correlations between the mfSSVEP signals and the set of reference signals (sinusoids or binary sequences) derived from each visual stimulus (one per each visual stimulus). (*) CCA coefficients of the reference signals that yield maximum correlation with the mfSSVEP signals (one per each reference signal and each visual stimulus). (*) Power Spectral Density (PSD) of the optimal combination of mfSSVEP signals maximally correlated with the set of reference signals derived from each visual stimulus (one per each visual stimulus). Note: the above parameters may be the differences between the PSDs when the visual stimulus was present and absent from the stimuli patterns.
feature
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any FeatureParameter entities.
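For illustration, a computed feature could be modeled as follows. This Turtle sketch is hypothetical: the bci: namespace IRI, the instance names, and the two ex: linking properties (the spec does not fix names for the FeatureParameter-to-ResponseTag and FeatureParameter-to-Model links) are assumptions.

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix bci:  <https://w3id.org/BCI-ontology#> .  # assumed namespace IRI
@prefix ex:   <http://example.org/bci-app#> .     # hypothetical instances and properties

# A CCA-correlation feature computed for one epoch of the raw DataBlock
ex:ccaCorr_stim1 a bci:FeatureParameter ;
    rdfs:comment "CCA correlation between the mfSSVEP signals and the reference signals of visual stimulus 1" ;
    ex:characterizes  ex:responseTag01 ;  # hypothetical link: FeatureParameters characterize a ResponseTag
    ex:definedByModel ex:ccaModel .       # hypothetical link: the underlying Model that computes it
```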
bci:ImpactedProperty
Actuation
Actuation.png
2018-05-09T23:53:00
[SSN], [Seydoux2016]
ImpactedProperty
Status: *STABLE*
[SSN] It represents an actuatable quality (property or characteristic) of an ActuationTarget. An Actuator connects to an ImpactedProperty via the object property ssn:forProperty. [Seydoux2016] Following the Actuation-Actuator-Effect (AAE) design pattern proposed for the IoT Ontology (IoT-O), this concept captures the definition of ImpactedProperty (linked to san:Effect) from the following relationships: [san:Effect] --(impacts)--> [ImpactedProperty]; [ImpactedProperty] --(is property of)--> [sosa:FeatureOfInterest]. An Actuator triggers an ActuationEvent that causes an effect (modification) on the ActuationTarget: the ImpactedProperty.
For a complete interpretation about the san:Effect definition and its related semantic notions for this ontology, please refer to the editorial note in ActuationEvent.
See general remark about: 2_MAPPINGS-TO-SAN
%GENERAL_EXAMPLE%@Actuation-Use-Case
[SSN] A window actuator (Actuator) acts by (triggers) changing the state (ActuationEvent changes) between a frame and a window (ActuationTarget). The ability of the window to be opened and closed is its ImpactedProperty.
impacted property (as a consequence of an actuation effect)
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant ImpactedProperty classification.
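The window example above can be sketched in Turtle. The bci: namespace IRI and the ex: instance names are hypothetical; ssn:forProperty is the SSN property named in this entry.

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ssn:  <http://www.w3.org/ns/ssn/> .
@prefix bci:  <https://w3id.org/BCI-ontology#> .  # assumed namespace IRI
@prefix ex:   <http://example.org/bci-app#> .     # hypothetical instances

# The window actuator acts on the window's ability to be opened/closed
ex:windowActuator a sosa:Actuator ;
    ssn:forProperty ex:openableState .  # Actuator -> ImpactedProperty (per this entry)
ex:openableState a bci:ImpactedProperty .
ex:window a sosa:FeatureOfInterest .    # the ActuationTarget
```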
bci:Interaction
Session,Subject
Subject.png
2018-04-17T21:05:00
Interaction
Status: *STABLE*
A situation where multiple (more than one) Subjects interact with each other, while each is performing a single Activity. Commonly, it is expected that all the Subjects in an Interaction engage in the same Activity, but it is not required. Because an Interaction groups a set of Sessions (potentially many for each Subject), BCI applications can make correlations among these Sessions.
(*) This concept defines a Cluster of Sessions: a cross-sectional collection of multiple related Sessions that occur at the same time. (*) In an Interaction where multiple Subjects are performing different Activity-ies under the same Context, the Actions done by the Subjects are modeled as different Context.Events in (possibly) multiple Context.Scenes.
Interaction of multiple subjects
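A cluster of Sessions could look as follows in Turtle. This is a hypothetical sketch: the bci: namespace IRI, the instance names, and the ex:groupsSession property (the spec does not fix a name for the Interaction-to-Session link) are assumptions.

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .  # assumed namespace IRI
@prefix ex:  <http://example.org/bci-app#> .     # hypothetical instances and property

# Two Subjects, each in their own Session, correlated through one Interaction
ex:interaction01 a bci:Interaction ;
    ex:groupsSession ex:sessionAlice , ex:sessionBob .  # hypothetical grouping property
ex:sessionAlice a bci:Session .
ex:sessionBob   a bci:Session .
```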
bci:Marker
AnnotationTag
Marker_(SOSA-SSN).png
2018-04-17T23:10:00
Marker
Status: *STABLE*
Corresponds to the "entry points" of the annotation tags of the data.
annotation tag (or data segment pointer)
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the type of Markers (or Annotation Tags) that define "entry points" in DataSegments. This ontology defines two types of Markers: the ResponseTag and the StimulusTag.
bci:Modality
Observations
Aspect-and-Modality_(SOSA-SSN).png
=== ** scopeNote ** This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of human signals (Modality Signal Type --nature of the data--) analyzed by BCI applications, such as: (*) EEG (Electroencephalogram). (*) ECG (Electrocardiogram). (*) MoCap (Motion Capture). (*) Eye-Gaze (for gaze or eye-tracking). (*) Audio. (*) Video. (*) Hand-Gesture. (*) Keyboard-Hit. (*) Mouse-Click. This ontology does not define all the modalities listed above. ===
2016-08-20T05:19:00
[SSN], [Compton2009]
Modality
Status: *STABLE*
[SSN]: A Modality is a kind of an "Observable Quality", i.e., an aspect (the human signals) of an entity (the human body) that is intrinsic to and cannot exist without the entity and is observable by a sensor (Device). In the BCI domain, the Modality defines a certain type of measurement (classification) related to a specific kind of data due to its nature. Literally, Modality means the "Mode of the data". The Modality defines, in an intrinsic manner, the operational functionality of any Device based on its related ChannelingSpec information. That is, a specific type of Device operates for a specific type of Modality: the nature of the data sensed. Each Modality must have its own complete and generic ChannelingSpec information.
A Modality has its own specific: (*) Measurement procedures, (*) Applications (each one with relevant attributes), and (*) Stimuli.
See general remark about: ASPECT-and-MODALITY
The following descriptions capture the definition of this concept ([oldSSN: Property] and [Compton2009]) adjusted to this ontology: (*) An observable Quality of human physiological signals. That is, a characteristic of an Aspect (human body's state) that is intrinsic to and cannot exist without the Aspect and is observable by a Device. (*) Devices observe physiological signals (Modality-ies) of Aspects: for example, the EEG signals (Modality) of an emotion (Aspect).
[ESS 1.0]: (*) This data object describes the names of the different types of modalities recorded in a study. Corresponds to the (/study/summary/recordedModalities/modality/name) node definition. [ESS 2.0]: (*) It contains information about one or more sets of recording data parameters (which can apply to multiple Records). (*) Corresponds to the (/study/recordingParameterSets/recordingParameterSet) node definition. (*) Most studies have only a single parameter set, i.e., the same types of data (EEG, MoCap, etc.) are recorded in the same channel ranges, with the same device types and with the same sampling rates. (*) This "recording parameter set" is associated with Records nodes (which represent the "dataRecording" nodes).
recorded modality
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of human signals (Modality Signal Type --nature of the data--) analyzed by BCI applications.
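A Modality instance could be declared as below. The bci: namespace IRI and the ex:eegModality name are hypothetical; typing it also as sosa:ObservableProperty follows the SOSA/SSN mapping stated in this spec.

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix bci:  <https://w3id.org/BCI-ontology#> .  # assumed namespace IRI
@prefix ex:   <http://example.org/bci-app#> .     # hypothetical instance

# EEG as a Modality: an observable quality of the human body
ex:eegModality a bci:Modality , sosa:ObservableProperty ;
    rdfs:label "EEG (Electroencephalogram)" .
```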
bci:Model
AnnotationTag
Model_(SOSA-SSN).png
2018-04-16T00:38:00
Model
Status: *STABLE*
Describes a Machine Learning Model (commonly, a mathematical optimization or computational statistics algorithm for predictive analytics) that "detects something" in a DataSegment. A common name for a Model is "classifier".
In the BCI domain, a Model can generate many different results related to a ResponseTag: each one, can be depicted as a FeatureParameter.
Right after a set of DataSegments is created, two consecutive tasks occur: (*) A set of editing processes is run on the recordings, which classify the data sets based on different Models. (*) A set of ResponseTags is created, which capture the information related to "what is so special about" a particular DataSegment.
model
bci:NeurologicalAspect
Observations
Aspect-and-Modality_(SOSA-SSN).png
2016-05-24T00:47:00
NeurologicalAspect
Status: *STABLE*
Describes the classification of NeurologicalAspects. One application for this Aspect is glaucoma monitoring.
neurological aspect
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the NeurologicalAspects of the Records.
bci:NonChannel
SystemCapabilities
MeasurementCapability_(SOSA-SSN).png
=== ** editorialNote ** The BCI ontology defines this modeling structure for EEG Devices: see the definition of the Object Property hasEegNonChannelData. ===
2017-08-30T22:34:00
[SSN]
NonChannel
Status: *STABLE*
[SSN] The NonChannel of any Device describes a set of measurement properties (ssn-system:SystemProperty-ies) of a sensor (sosa:Sensor) in specific conditions, as explained in the [SSN] System Capabilities Module and the [oldSSN] MeasuringCapability Module, that are not related directly to any DeviceChannelingSpec. Note that the measurement properties described by this concept are of a sensor (Device, subclass of sosa:Sensor), not of a specific observed measurement (Record, subclass of sosa:Observation).
See general remark about: 1_MAPPINGS-TO-SOSA-SSN
See general remark about: UNITS-OF-MEASUREMENT
This ontology leaves it open to BCI applications how to properly describe basic non-channeling measurement capabilities for their relevant set of different classes of sensors (the Device class hierarchy) used in BCI activities. A modeling guideline (based on [oldSSN]: oldssn:MeasurementCapability) can be found at: (*) (5.3.5 MeasuringCapability -- 5.3.5.2 How to describe capabilities of a sensor?), (*) (5.4.2 Smart product example -- 5.4.2.3 Measurement capabilities). The sensor ontology does not restrict the way in which specific measurement properties (oldssn:MeasurementCapability) are described. Thus, specialized applications may define their own values of measurement properties (oldssn:MeasurementCapability maps to ssn-system:SystemCapability). If necessary, BCI applications should (but are not required to) define a set of restrictions and specialized connections (subproperties) on the property hasNonChannelData (subproperty of ssn-system:hasSystemCapability) for each particular subclass of Device (subclass of sosa:Sensor) that describes sensors of specific types. A relevant non-channeling measurement property (ssn-system:SystemProperty) related to a Device is the sampling rate. Based on the modeling of the ssn-system:SystemCapability and ssn-system:SystemProperty concepts (along with the guidelines found in 5.4.2 Smart product example -- 5.4.2.2 Sensor), this ontology defines the SamplingRate concept.
non-channeling data component (other BCI measurement capability)
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe a set of relevant non-channeling measurement capabilities for each type of Device.
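A sketch of the modeling guideline above, under stated assumptions: the bci: namespace IRI and the ex: instance names are hypothetical; hasNonChannelData and SamplingRate are named in this spec, and ssn-system:hasSystemProperty is the SSN System Capabilities property.

```turtle
@prefix ssn-system: <http://www.w3.org/ns/ssn/systems/> .
@prefix bci: <https://w3id.org/BCI-ontology#> .  # assumed namespace IRI
@prefix ex:  <http://example.org/bci-app#> .     # hypothetical instances

# Non-channeling capabilities of an EEG headset, including its sampling rate
ex:eegHeadset a bci:Device ;
    bci:hasNonChannelData ex:headsetCapability .  # subproperty of ssn-system:hasSystemCapability
ex:headsetCapability a bci:NonChannel ;
    ssn-system:hasSystemProperty ex:headsetSamplingRate .
ex:headsetSamplingRate a bci:SamplingRate .       # measured in Hertz (see SamplingRate)
```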
bci:Playout
Session,Context
Playout.png
2018-04-06T02:32:00
Playout
Status: *STABLE*
Describes the data logging (recording) of the dynamic state of the Context: the "play out" of the happenings (Context.Events). A Playout consists of many PlayoutInstants.
playout record
bci:PlayoutInstant
Session,Context
PlayoutInstant.png
2018-04-06T03:03:00
PlayoutInstant
Status: *STABLE*
Captures any relevant log entry in a Playout (related to Context.Events). Two specific types of instances are defined: (*) PlayoutInstant.SubjectAction. (*) PlayoutInstant.ContextEvent.
playout instant
Describing entities that form any log entry in a Playout.
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the types of log entries in a Playout. Two important types of log entries are defined in this ontology.
bci:PlayoutInstant.ContextEvent
Session,Context
PlayoutInstant.png
2018-02-11T03:03:00
PlayoutInstant.ContextEvent
Status: *STABLE*
Captures a relevant log entry in a Playout for a Context Event issued by a Context.Event instance during a Session.
playout instant: context event type
Events (Context.Event) issued in a Context during a Session.
bci:PlayoutInstant.SubjectAction
Session,Context
PlayoutInstant.png
2018-02-11T02:57:00
PlayoutInstant.SubjectAction
Status: *STABLE*
Captures a relevant log entry in a Playout for a Subject's Event issued by an Action instance during a Session.
playout instant: subject action type
Events (Actions) issued by a Subject during a Session.
bci:ProtocolBuffersDataFormat
Observations
RecordedData_(SOSA-SSN).png
Google Protocol Buffers DataFormat
2016-06-03T02:25:00
ProtocolBuffersDataFormat
Status: *STABLE*
Represents a Protocol Buffers DataFormat, which is a language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols and data storage.
protocol buffers BCI data format
bci:Record
Observations
Record_(SOSA-SSN).png
=== ** scopeNote ** This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of BCI records generated by BCI devices for BCI activities, such as: (*) EEG (Electroencephalogram) record. (*) ECG (Electrocardiogram) record. (*) MoCap (Motion Capture) record. (*) Eye-Gaze record (for gaze or eye-tracking): A BCI Record that stores the coordinates of user's eye gaze captured by eye trackers. (*) Audio record. (*) Video record. (*) Hand-Gesture record: A BCI Record that stores the coordinates and velocities of user's hands and fingers captured by trackers such as LeapMotion. (*) Keyboard-Hit (keystroke) record: A BCI Record that stores the subject's hits on different keyboard keys. (*) Mouse-Click record: A BCI Record that stores the position coordinates of a subject's clicks with different mouse buttons. (*) Visual BCI record. This ontology does not define all the BCI records listed above. ===
2018-01-29T02:36:00
[SSN], [ESS]
Record
Status: *STABLE*
A Record is a type of sosa:Observation with the following characteristics: (*) A single sosa:Observation for a specific unimodal BCI data capture task (with its own purpose). (*) [SSN] A Sensing Method (procedure) is used to estimate or calculate a value of a specific sosa:ObservableProperty (Modality) based on a specific sosa:FeatureOfInterest (Aspect). (*) A single Device is used to observe the unimodal BCI data (RecordedData). [oldSSN]: Record, along with its related concepts, defines an appropriate structure based on the following description: "An observation (Record) is a situation that describes an observed feature (Aspect), an observed property (Modality), a sensor (Device) and method of sensing used, and a value (RecordedData) observed for the property: that is, an observation (Record) describes a single value (RecordedData) attributed to a single property (Modality) by a particular sensor (Device)". In the BCI domain, it is common that some related observations occur immediately after an observation has ended, changing some of its initial channeling or Device settings (parameters or conditions). Hence, it is desirable to keep a temporal tracking of the previous and following related observations. This is achieved via the hasPrevious and hasNext object properties. The logical data structure template of a Record is defined in its associated RecordChannelingSpec information object. General and consolidating data analytics for the whole raw data recordings can be associated with a Record through a set of related FeatureParameters. Additional relevant metadata can be extended via the object property hasMeasurementProperty.
Following the structure presented in this example, the Record concept has been modeled to describe BCI observations, including the following object properties (SOSA object property / BCI object subproperty / between (from) ... and (to)): (*) sosa:hasFeatureOfInterest / aspectOfInterest / from sosa:Observation (Record) to sosa:FeatureOfInterest (Aspect). (*) sosa:observedProperty / observedModality / from sosa:Observation (Record) to sosa:ObservableProperty (Modality). (*) sosa:madeBySensor / observedByDevice / from sosa:Observation (Record) to sosa:Sensor (Device). (*) ssn:wasOriginatedBy / --- / from sosa:Observation (Record) to ssn:Stimulus (StimulusEvent). (*) sosa:hasResult / observationResult / from sosa:Observation (Record) to oldssn:SensorOutput (RecordedData). (*) (Notion taken from oldssn:isProducedBy) / isProducedByDevice / from oldssn:SensorOutput (RecordedData) to sosa:Sensor (Device). Adequate restrictions have been designed accordingly for each object property.
See general remark about: 1_MAPPINGS-TO-SOSA-SSN
See general remark about: PROCEDURES
[oldSSN] Following the guidelines explained in (5.3.6 Observation) and (5.4.2 Smart product example -- 5.4.2.4 Observation), the main restrictions modeled for sosa:Observation (Record) are: (*) Exactly 1 sosa:FeatureOfInterest (Aspect): details what was sensed. (*) Exactly 1 sosa:ObservableProperty (Modality): details what was sensed. (*) Exactly 1 sosa:Sensor (Device): describes what made the Observation. (*) Some ssn:Stimulus (StimulusEvent): details (what made) the sosa:Sensor input. Other restrictions are: (*) Exactly 1 oldssn:Sensing (a sub-subclass of sosa:Procedure; it describes how the Observation was made): not adjusted for BCI activities.
[oldSSN] states that "an Observation is a description of the context, the Situation, in which the observation was made". In this ontology, the Context is directly related through the Session, which is a Situation where the Observation (Record) was made.
BCI record (measurement record)
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of BCI records generated by BCI devices for BCI activities.
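A single unimodal observation could be instantiated as follows, using the BCI subproperties named in this entry. The bci: namespace IRI and the ex: instance names are hypothetical; the property names come from the mapping given in this entry.

```turtle
@prefix ssn: <http://www.w3.org/ns/ssn/> .
@prefix bci: <https://w3id.org/BCI-ontology#> .  # assumed namespace IRI
@prefix ex:  <http://example.org/bci-app#> .     # hypothetical instances

# One unimodal observation wired through the BCI subproperties
ex:record01 a bci:Record ;                     # a sosa:Observation
    bci:aspectOfInterest  ex:vigilance ;       # subproperty of sosa:hasFeatureOfInterest
    bci:observedModality  ex:eegModality ;     # subproperty of sosa:observedProperty
    bci:observedByDevice  ex:eegHeadset ;      # subproperty of sosa:madeBySensor
    ssn:wasOriginatedBy   ex:stimulusEvent01 ; # the StimulusEvent that originated it
    bci:observationResult ex:recordedData01 .  # subproperty of sosa:hasResult
```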
bci:RecordChannelingSpec
Descriptor,Observations
Aspect-and-Modality_(SOSA-SSN).png
Descriptor_(SOSA-SSN).png
Record_(SOSA-SSN).png
2016-08-08T23:15:00
RecordChannelingSpec
Status: *STABLE*
Based on the adjusted settings of the DeviceChannelingSpec made for the observation, a Record has its own specific channeling schema information: that is, the logical components (Channels) and their extended generic metadata set that describe the Record's own logical data structure (specific to the observation and the Subject), according to the recording setup. A RecordChannelingSpec captures two information sets for a specific Record: (*) Its complete channeling schema description, in the form of an external document specification (outside the metadata repository). (*) Relevant metadata attributes regarding the general characteristics of the channeling schema: a set of related Channels. The structure described in a RecordChannelingSpec (first information set mentioned above) is based on a concrete subset of the DeviceChannelingSpec that the Device supports. Hence, for practical reasons, a RecordChannelingSpec is defined as a subclass of DeviceChannelingSpec.
The channeling schema information objects are structured in the following way. The Channeling Schema of a... (*) Modality (ChannelingSpec): the complete theoretical spec; the generic template for a specific sosa:ObservableProperty. (*) Device (DeviceChannelingSpec): a functional subset of the ChannelingSpec; it defines the logical subset of the complete spec for the specific functionality of a oldssn:SensingDevice. (*) Record (RecordChannelingSpec): the concrete subset of the DeviceChannelingSpec for a specific sosa:Observation. This information object is user-specific, according to the recording setup.
(*) A RecordChannelingSpec for an EegRecord would define the values of the positions for specific EegChannels used by the EegDevice when the observation is made. A proper name for this spec would be EegRecordChannelingSpec. (*) For "Precision" Records, BCI applications may find it important to keep the information regarding the Channel's coordinates.
Related to the RecordChannelingSpec, this concept has the following definitions: (*) [ESS 2.0]: a comma separated list of labels of the corresponding referenced channel or channels. This node is required for EEG Modality and it's used during EEG or ECG recording. For example, if using 10-20 system and numerical average of both mastoids, use "A1, A2" for {referenceLabel} and "Mastoids" for {referenceLocation}. Note that there could be multiple labels. (*) [XDF] Signal referencing scheme: name of the dedicated reference channel(s), if part of the measured channels (repeated if multiple). For an EEG channel label, its value is based on the labeling scheme. For EEG, the preferred labeling scheme is 10-20 (or the finer-grained 10-5).
record channeling schema spec
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant type of a RecordChannelingSpec to capture the external information that defines the channeling schema information of a Record.
bci:RecordSpec
Descriptor,Observations
Descriptor_(SOSA-SSN).png
Record_(SOSA-SSN).png
2016-07-19T03:23:00
[XDF], [ESS]
RecordSpec
Status: *STABLE*
A RecordSpec is an information object that records (describes) specific properties (such as: specs of assistant materials, ambience settings, tools, etc.) regarding how a Record was made. Similarly to DeviceSpec, the structure of RecordSpec has been modeled as a composite object so that it can be composed as a set of RecordSpecs to describe specific parts on how a Record was made. In this way, a RecordSpec is considered as a bag of general, extended and descriptive properties about the Record settings. A RecordSpec is a specialized Descriptor. RecordSpecs can be used to record any other descriptive and extended information related to any settings or conditions on how the Record was made.
(*) The channeling schema of a Record is defined as an independent component from the RecordSpec. A Record's channeling schema (RecordChannelingSpec) is a subset of the DeviceChannelingSpec defined for its corresponding Device. (*) This ontology does not define any particular information object of a RecordSpec.
Some BCI applications based on [XDF] find it important to keep information regarding which assistant materials (hardware) were used, and how, when the Record was made. Hence, a BCI application could define a classification for different types of specifications, such as: (*) Hardware specs for EegRecords: (*) Coupling (source [XDF]): type of coupling used (e.g., Capacitive, Dry, Gel, Saline). (*) Surface (source [XDF]): type of the contact surface (e.g., Bristle, Pad, Pins, Plate).
record specification
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe specific types of RecordSpec.
bci:RecordedData
Observations,Results
Record_(SOSA-SSN).png
RecordedData_(SOSA-SSN).png
2018-02-10T03:24:00
[oldSSN], [SSN]
RecordedData
Status: *STABLE*
[oldSSN] It is a specific type of oldssn:SensorOutput (aligned to sosa:Result), which is a piece of information outputted by a Device in a sosa:Observation: an observed value for BCI activities. Based on [oldSSN], the value itself is represented by a specific type of oldssn:ObservationValue: a sequence of DataBlocks. This concept abstracts a raw data set (independent of its representation and access method) outputted by a Device for a specific Modality. In this way, a RecordedData has: (*) A single data representation, a DataFormat, and (*) multiple AccessMethods (either archived or in real-time). In the BCI domain, it is common that the data "evolves" over time. That is, there are changes in the data structure: (*) from its "initial" state (ever since it is collected from a Device: raw data) (*) to "following" states (when applying specialized algorithms to recognize patterns through Models: transformed data). For example, the evolution of EEG data over time resembles a tree structure. Hence, it is desirable for BCI applications to keep a temporal tracking of the previous and following versions of the data: a derived data tree, built by keeping links between data versions. This is achieved via the hasPrevious and hasNext object properties.
From the perspective of the data (RecordedData): (*) Its physical structure is defined through the DataFormat. (*) Its logical structure is defined through the associated RecordChannelingSpec of its Record (the defined collection of Channels).
See general remark about: 1_MAPPINGS-TO-SOSA-SSN
In [ESS 1.0]: (*) Corresponds to the "eegRecordings" node: a specific collection of raw BCI data collected from a subject in a specific session. In [ESS 2.0]: (*) Corresponds to the "dataRecordings" node: information about EEG (or other data modality) recordings.
recorded BCI data
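The versioning chain described above can be sketched with the hasPrevious/hasNext properties named in this entry; the bci: namespace IRI and the ex: instance names are hypothetical.

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .  # assumed namespace IRI
@prefix ex:  <http://example.org/bci-app#> .     # hypothetical instances

# A derived-data chain: raw EEG -> band-pass-filtered version
ex:rawEegData a bci:RecordedData ;
    bci:hasNext ex:filteredEegData .   # link to the derived (transformed) version
ex:filteredEegData a bci:RecordedData ;
    bci:hasPrevious ex:rawEegData .    # back-link to the raw version
```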
bci:ResponseTag
AnnotationTag
Marker_(SOSA-SSN).png
2018-01-02T03:01:00
ResponseTag
Status: *STABLE*
Information object that captures a Marker issued by a Model. In the BCI domain, a natural (physiological or neurological) change in the Subject's state while doing an Activity is simply called a State. A ResponseTag need not be directly linked to a change in the Context (induced by a Context.Event, specifically a StimulusEvent) in a Session. A ResponseTag represents "something" detected by a Machine Learning Model (Model). Its content is represented by a set of related FeatureParameters.
Right after a set of DataSegments is created, two consecutive tasks occur: (*) A set of editing processes is run on the recordings, which classify the data sets based on different Models. (*) A set of ResponseTags is created, which capture the information related to "what is so special about" a particular DataSegment.
This is one of the most important concepts in this ontology.
In a common M2M semantic search query, the following input parameters may be used to retrieve a set of ResponseTags related to its DataSegments: (*) Activity: for example, driving. (*) Aspect: for example, vigilance and alert. (*) Modality: for example, EEG, EOG. (*) Subject: filtered by gender, age, etc.
state (response tag)
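An M2M query of the kind described above could be expressed in SPARQL roughly as follows. This is a hypothetical sketch: the namespace IRIs, the ex: linking properties (this spec does not fix names for the ResponseTag-to-DataSegment and ResponseTag-to-Record links), and the filter individuals are all assumptions.

```sparql
# Hypothetical sketch: property names and IRIs are placeholders, not part of this spec.
PREFIX bci: <https://w3id.org/BCI-ontology#>     # assumed namespace IRI
PREFIX ex:  <http://example.org/bci-app#>

SELECT ?tag ?segment WHERE {
  ?tag    a bci:ResponseTag ;
          ex:marksSegment ?segment ;             # hypothetical: ResponseTag -> DataSegment
          ex:fromRecord   ?record .              # hypothetical: ResponseTag -> Record
  ?record bci:observedModality ex:eegModality ;  # filter by Modality (e.g., EEG)
          bci:aspectOfInterest ex:vigilance .    # filter by Aspect (e.g., vigilance)
}
```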
bci:SamplingRate
SystemCapabilities
SamplingRate_(SOSA-SSN).png
2017-08-30T23:52:00
[ESS], [XDF]
SamplingRate
Status: *STABLE*
Sampling rate of the Device. Its measurement unit is Hertz (Hz). As a relevant non-channeling measurement property (ssn-system:SystemProperty) related to a Device, its modeling is based on: (*) the ssn-system:SystemCapability and ssn-system:SystemProperty concepts, and (*) the guidelines found in (5.4.2 Smart product example -- 5.4.2.2 Sensor).
Related concept for a Record: hasSamplingRate.
See general remark about: UNITS-OF-MEASUREMENT
sampling rate of a device
bci:Session
Session
Session.png
2018-04-17T22:03:00
[ESS], [SSN]
Session
Status: *STABLE*
A Session monitors how one Subject interacts with one Context while performing one Activity, through collecting a nonempty set of multimodal biomedical sosa:Observations (Records) and/or sosa:Actuations (Actuations). A Session has the following characteristics and restrictions: (*) Comprises a collection of multimodal BCI data capture tasks (each one with its own specific measurement purpose: Aspect). (*) Monitors exactly one Subject. (*) Monitors exactly one Activity (performed by the Subject). (*) Monitors exactly one Context (while the Subject interacts with it). (*) Comprises exactly one Playout collected from the associated Context. (*) Groups different and multiple Records (multimodal data) that are observed (collected) simultaneously from the Subject. (*) May group different and multiple Actuations simultaneously from the Subject. [oldSSN]: The concept of Session defines an appropriate structure to group multiple Records (multimodal data), based on the following description: "Observations (Records) of multiple features (Aspects) or multiple properties (Modality-ies) of the one feature should be represented as either compound properties, features and values or as multiple observations, grouped in some appropriate structure". (A similar depiction applies to Actuation.) In the BCI research domain, a Session can have multipurpose extended metadata sets to describe broader concepts and definitions regarding the nature and purpose of this information object. These external metadata sets can be associated with Descriptors.
See general remark about: PROCEDURES
External descriptions that complement and extend the information about a Session can be added through Descriptors. BCI applications based on [ESS 1.0] could define Descriptors to include information, such as: (*) Lab. ID. -- identifier of the session used in the original lab notes (if available, otherwise insert 'NA'). (*) Task Label. -- indicates which task is being performed in the session (e.g. A, B, C,...). Only use this node if there are different tasks. Otherwise leave the node blank. (*) For Session entities: If different tasks occur in the same session, repeat the session node with a different "taskLabel", and other information that may be different, such as the "eegRecording" node. (*) For EventCode entities: Use this only if there are multiple tasks in the study and they use the same event codes, otherwise leave blank.
In [ESS 2.0], a Session is related to the RecordedParameterSet concept, found in the following XML element: (../recordingParameterSet/recordingParameterSetLabel). The "recordingParameterSet" node groups the information of multiple Modality-ies and, also, DeviceSpecs. Hence, it implies that a Session (a Record set) is associated with (uses) multiple Devices, because of the "dataRecording" node definition in ESS (multiple RecordedData).
BCI session
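The Session restrictions above could be instantiated as follows. All ex: property and instance names here are hypothetical placeholders (this spec states the cardinalities but the concrete property names shown are assumptions), as is the bci: namespace IRI.

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .  # assumed namespace IRI
@prefix ex:  <http://example.org/bci-app#> .     # hypothetical instances and properties

# One Subject, one Activity, one Context, one Playout; many grouped Records
ex:session01 a bci:Session ;
    ex:monitorsSubject  ex:subject01 ;  # hypothetical property names; the
    ex:monitorsActivity ex:driving ;    # cardinalities (exactly one each)
    ex:monitorsContext  ex:carCabin ;   # follow this entry's restrictions
    ex:hasPlayout       ex:playout01 ;
    ex:groupsRecord     ex:record01 , ex:record02 .
```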
bci:StimulusEvent
Context,Observations
StimulusEvent_(SOSA-SSN).png
=== ** HED class in EXAMPLE SPEC ** Label: HED.Stimulus. Image: HED.Stimulus_Visual.png. Source: [HED]. Definition: Defines a general Stimuli HED. Scope note: This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any XXX, such as: (*) HED.Auditory (*) HED.Pain (*) HED.Smell (*) HED.TMS (*) HED.Tactile (*) HED.Taste (*) HED.Visual ** HED attribute for the HED class in EXAMPLE SPEC ** Event code tag, based on the "Hierarchical Event Descriptor (HED) Tags for Analysis of Event-Related EEG Studies" document (if available, otherwise leave blank). ===
2018-04-11T13:08:00
[SSN]
StimulusEvent
Status: *STABLE*
A StimulusEvent describes an event that triggers a stimulus to the Subject during a Session. By its own nature, it may affect the Subject's performance of the Activity (and, therefore, a set of Actions related to the Activity). A StimulusEvent is an external happening on a specific Context that generates the input for the sensors ([SSN] concepts). Contextually, these events are raised (object property: raises) by a set of interacting Context.Objects through a Context.Method. In [SSN], this concept is a subclass of ssn:Stimulus and, therefore, a oldssn:SensorInput, which describes the (data) input for the sosa:Sensors. In the BCI domain, this concept is simply called an "Event": the stimulus or trigger that causes the relevant measurement to be, in fact, processed or analyzed.
The following descriptions capture the definition of this concept ([oldSSN: Stimulus], 5.3.1.2.1 Stimuli) adjusted to this ontology: (*) StimulusEvents are detectable changes in the environment (Context), i.e., in the physical or a virtual world. (*) A StimulusEvent is an event that "triggers" the Device. (*) StimulusEvents can either be directly or indirectly related to observable Modality-ies and, therefore, to Aspects. (*) The same types of StimulusEvents can trigger different kinds of Devices and be used to reason about different Modality-ies. (*) The Modality-ies associated to the StimulusEvent may be different from the eventually observed (Record) Modality. (*) It is the StimulusEvent, not the Context.Object, that triggers the Device. (*) A StimulusEvent may only be usable as a proxy for a specific region of an observed Modality.
A StimulusEvent describes a specific component of the Context that "generates" an annotation tag (StimulusTag). Some examples of a StimulusEvent are: (*) Red light for 15 seconds at a 66 Hz frequency. (*) Green light for 10 seconds at a 74 Hz frequency.
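The example stimuli above can be minted as individuals of this class. The following sketch is illustrative only: the property names bci:hasColor, bci:hasDuration and bci:hasFrequency are hypothetical placeholders, not BCI-O terms. It emits such an individual as Turtle:

```python
# Minimal sketch: serialize a hypothetical bci:StimulusEvent individual as Turtle.
# NOTE: bci:hasColor, bci:hasDuration and bci:hasFrequency are ILLUSTRATIVE
# placeholder properties, not definitions taken from BCI-O.

BCI = "https://w3id.org/BCI-ontology#"

def stimulus_event_turtle(local_name, color, seconds, hertz):
    """Build a Turtle fragment for one stimulus-event individual."""
    return (
        f"@prefix bci: <{BCI}> .\n"
        "@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .\n\n"
        f"bci:{local_name} a bci:StimulusEvent ;\n"
        f'    bci:hasColor "{color}" ;\n'
        f'    bci:hasDuration "{seconds}"^^xsd:decimal ;\n'
        f'    bci:hasFrequency "{hertz}"^^xsd:decimal .\n'
    )

# "Red light for 15 seconds at a 66 Hz frequency."
print(stimulus_event_turtle("redLightStimulus", "red", 15, 66))
```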
stimulus event
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe any relevant contextual event that captures the stimulus or trigger causing a relevant BCI measurement.
1
bci:StimulusTag
AnnotationTag
Marker_(SOSA-SSN).png
2016-05-20T18:48:00
StimulusTag
Status: *STABLE*
Information object that captures a Marker induced by a StimulusEvent. During data recording, the system automatically creates a Marker (StimulusTag) for the oldssn:SensorInput based on the Context of the Session (issued by a Context.Event, specifically a StimulusEvent).
This is one of the most important concepts in this ontology.
stimulus tag
0
1
1
0
1
bci:Subject
Subject
Context.png
Subject.png
2018-06-05T23:49:00
[ESS], [XDF]
Subject
Status: *STABLE*
A specific physical (natural) person (possibly anonymous but possessing a unique identity) with certain attributes, on which the Sessions are recorded (from which the data is observed: Record). A Subject interacts with a Context through her Actions. "Subject" comes from the common terminology used in BCI experiments when referring to a specific person. This concept is based on the notion of an Electronic Patient/Medical Record, such as the HL7 Record. Information objects related to this concept (namely SubjectState) capture the description of the Medical and Physiological "Condition" of a Subject in a Session. Thus, a Subject may have multiple Descriptors associated with it, such as HL7 Records or specific XML vocabularies from the industry. This ontology does not define any specific set of attributes associated with a Subject. BCI applications can extend this concept according to their information needs and system requirements.
The subject is the point of reference (focus) of the data monitoring and data analysis, from which BCI applications collect Measurement Recordings. Hence, the name Subject instead of Person.
[ESS] and [XDF] define some useful data type properties (attributes) associated with a Subject. Some examples of these attributes are: (*) Gender: (*) Defined as an enumerated value = { Female, Male, ... }. (*) It can be derived as a subproperty extended from the (dbp:Person).sex property definition. (*) Year of birth (YOB): (*) Defined as a positive integer greater than or equal to 1900. (*) It can be derived as a subproperty extended from the dbp:Person definition. (*) Handedness: (*) Defined as an enumerated value = { Ambidextrous, Left, N/A, Right }. (*) Subject's dominantly used hand. Related to the medical record. (*) Hearing: (*) Defined as an enumerated value = { CorrectedToNormal, Impaired, Normal }. (*) Subject's hearing condition. Related to the medical record. (*) Vision: (*) Defined as an enumerated value = { CorrectedToNormal, Impaired, Normal }. (*) Subject's vision condition. Related to the medical record.
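The attribute ranges listed above can be checked mechanically by an application. The following Python sketch is illustrative only (the validate_subject helper and its field names are not part of this spec); it enforces the enumerations and the YOB constraint:

```python
# Illustrative validator for the Subject attributes suggested by [ESS]/[XDF].
# The enumerations mirror the examples in the text; this helper is NOT
# defined by BCI-O.

HANDEDNESS = {"Ambidextrous", "Left", "N/A", "Right"}
CONDITION = {"CorrectedToNormal", "Impaired", "Normal"}  # hearing / vision

def validate_subject(gender, yob, handedness, hearing, vision):
    """Return a dict of the attributes, raising ValueError on any violation."""
    if not (isinstance(yob, int) and yob >= 1900):
        raise ValueError("YOB must be a positive integer >= 1900")
    if handedness not in HANDEDNESS:
        raise ValueError(f"unknown handedness: {handedness}")
    if hearing not in CONDITION or vision not in CONDITION:
        raise ValueError("hearing/vision must be one of "
                         + ", ".join(sorted(CONDITION)))
    return {"gender": gender, "yob": yob, "handedness": handedness,
            "hearing": hearing, "vision": vision}

subject = validate_subject("Female", 1985, "Right", "Normal", "CorrectedToNormal")
```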
person
bci:SubjectState
Descriptor,Session,Subject
Descriptor_(SOSA-SSN).png
Session.png
2016-06-29T05:00:00
[ESS], [HED]
SubjectState
Status: *STABLE*
Describes the state of the Subject during the Session, through a collection of external specifications which capture extended metadata of the Subject's overall state. A state can be further classified to document more accurately the nature of the metadata (such as physiological state, cognitive state, or emotional state). The nature of this concept is "transient" and depends directly on the Session: it is considered an extended collection of metadata related to the Session that captures the overall state of the Subject during the data recording. A SubjectState is, itself, a specialized Descriptor that may have multiple Descriptors associated with it, which describe extended metadata sets such as the HL7 Record.
Some examples of SubjectState may include descriptions regarding: (*) Physiological state: (*) [ESS] Age: Subject's age (in years) at the time of the Session. (*) [ESS] Height: Subject's height in centimeters (at the moment of the Session). (*) [ESS] Weight: Subject's weight in kilograms (at the moment of the Session). (*) [ESS] Hearing: Subject's hearing (e.g. "CorrectedToNormal", "Impaired", "Normal"). (*) [ESS] Vision: Subject's vision (e.g. "CorrectedToNormal", "Impaired", "Normal"). (*) [ESS] Caffeine: number of hours since last caffeine intake, if less than 12 hours. (*) [ESS] Alcohol: whether the Subject has consumed alcohol within 24 hours before the Session (a logical value). (*) [ESS] Medication: specification of the medication intake based on different parameters (time, chemical compounds, etc.). (*) Drowsiness: identified in [HED 1.31] as "awake". (*) Stress level. (*) [HED 1.31] Emotional state: (*) Alertness. Some additional metadata related to this concept used for research purposes could be: (*) A set of attributes to label the identity of the Subject in the Session. Example: (*) [ESS] A Lab. ID as a de-personalized Subject identifier in the research lab. (*) [ESS] A sequential ID to identify the Subject in a collection of Sessions. (Case: "InSessionNumber" attribute in [ESS 2.0]). (*) [ESS] An attribute to identify the group type that the Subject belongs to based on the research nature of the Sessions. Example: a Session Group to identify the Subject's group (e.g. "Autistic", "Normal", "Control", etc.).
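As a minimal illustration of the transient, per-Session nature of a SubjectState, the following hypothetical sketch (not part of this spec) assembles a per-session state record and applies the [ESS] caffeine rule (record hours since intake only if less than 12):

```python
# Illustrative sketch of a transient SubjectState record tied to one Session.
# Field names loosely follow the [ESS] examples in the text; the helper itself
# is hypothetical and not part of BCI-O.

def make_subject_state(session_id, age, height_cm, weight_kg,
                       caffeine_hours=None, alcohol_last_24h=False):
    """Build a per-session state dict; caffeine is recorded only if < 12 h ago."""
    state = {
        "session": session_id,
        "age": age,                 # [ESS] age at the time of the Session
        "height_cm": height_cm,     # [ESS] height at the moment of the Session
        "weight_kg": weight_kg,     # [ESS] weight at the moment of the Session
        "alcohol_last_24h": bool(alcohol_last_24h),
    }
    # [ESS] Caffeine: hours since last intake, only if less than 12 hours.
    if caffeine_hours is not None and caffeine_hours < 12:
        state["caffeine_hours"] = caffeine_hours
    return state

state = make_subject_state("session-001", age=31, height_cm=172, weight_kg=64,
                           caffeine_hours=3)
```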
Subject's state during a specific session
%APPLICATION%@cerebratek_nupod
This class is intended to be the root of a class hierarchy. Domain-specific applications can extend this class hierarchy for their own purposes to describe the different types of SubjectStates that can be found in a Session.
bci:XdfDataFormat
Observations
RecordedData_(SOSA-SSN).png
The XDF DataFormat.
2016-06-03T02:16:00
[XDF]
XdfDataFormat
Status: *STABLE*
Represents an XDF DataFormat. XDF is a general-purpose container format for multi-channel time series data with extensive associated meta-information. XDF is tailored towards biosignal data such as EEG, EMG, EOG, ECG, GSR, MEG, etc.
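As a rough illustration of the kind of per-stream metadata an XDF-like container carries (illustrative field names only, loosely following [XDF] stream headers; this sketch is not the XDF binary format):

```python
# Sketch of XDF-like stream metadata: name, type, channel count, nominal
# sampling rate, plus time-stamped multi-channel samples. ILLUSTRATIVE ONLY.

def make_stream(name, stype, channel_count, srate_hz):
    """Create an in-memory stream record with header-like metadata."""
    return {"info": {"name": name, "type": stype,
                     "channel_count": channel_count,
                     "nominal_srate": srate_hz},
            "time_stamps": [], "time_series": []}

def push_sample(stream, timestamp, sample):
    """Append one time-stamped sample; width must match channel_count."""
    if len(sample) != stream["info"]["channel_count"]:
        raise ValueError("sample width must match channel_count")
    stream["time_stamps"].append(timestamp)
    stream["time_series"].append(list(sample))

eeg = make_stream("ExampleAmp", "EEG", channel_count=4, srate_hz=256.0)
push_sample(eeg, 0.0, [1.2, -0.4, 0.0, 3.1])
```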
XDF EEG data format
san:Actuation
san:ActuationValue
san:Actuator
san:ActuatorInput
san:Effect
2017-04-17
This ontology is based on the SSN Ontology by the W3C Semantic Sensor Networks Incubator Group (SSN-XG), together with considerations from the W3C/OGC Spatial Data on the Web Working Group.
Copyright 2017 W3C/OGC.
Sensor, Observation, Sample, and Actuator (SOSA) Ontology
sosa
http://www.w3.org/ns/sosa/
(general)
2017-12-11T01:49:00
https://w3id.org/BCI-ontology#
Status: *STABLE*
Instance that identifies the BCI Ontology (BCI-O) as a vocabulary used in the linked data cloud. Its identifier corresponds to the BCI-O namespace URI.
bci:
W3C/OGC Spatial Data on the Web Working Group
0.9.6
The DOLCE+DnS Ultralite ontology.
It is a simplification of some parts of the DOLCE Lite-Plus library (cf. http://www.ontologydesignpatterns.org/ont/dul/DLP397.owl).
Main aspects in which DOLCE+DnS Ultralite departs from DOLCE Lite-Plus are the following:
- The names of classes and relations have been made more intuitive
- The DnS-related part is closer to the newer 'constructive DnS' ontology (http://www.ontologydesignpatterns.org/ont/dul/cDnS.owl).
- Temporal and spatial relations are simplified
- Qualities and regions are more relaxed than in DOLCE-Full: they can be used as attributes of any entity; an axiom states that each quality has a region
- Axiomatization makes use of simpler constructs than DOLCE Lite-Plus
- The architecture of the ontology is pattern-based, which means that DOLCE+DnS Ultralite is also available in modules, called 'content ontology design patterns', which can be applied independently in the design of domain ontologies (cf. http://www.ontologydesignpatterns.org). If many modules are needed in a same ontology project, it is anyway useful to use this integrated version.
The final result is a lightweight, easy-to-apply foundational ontology for modeling either physical or social contexts.
Several extensions of DOLCE+DnS Ultralite have been designed:
- Information objects: http://www.ontologydesignpatterns.org/ont/dul/IOLite.owl
- Systems: http://www.ontologydesignpatterns.org/ont/dul/SystemsLite.owl
- Plans: http://www.ontologydesignpatterns.org/ont/dul/PlansLite.owl
- Legal domain: http://www.ontologydesignpatterns.org/ont/dul/CLO/CoreLegal.owl
- Lexical and semiotic domains: http://www.ontologydesignpatterns.org/ont/lmm/LMM_L2.owl
- DOLCE-Zero: http://www.ontologydesignpatterns.org/ont/d0.owl is a commonsense-oriented generalisation of some top-level classes, which allows to use DOLCE with tolerance against ambiguities like abstract vs. concrete information, locations vs. physical artifacts, event occurrences vs. event types, events vs. situations, qualities vs. regions, etc.; etc.
DOLCE+DnS Ultralite
4.1
Created by Aldo Gangemi as both a simplification and extension of DOLCE and Descriptions and Situations ontologies.
In 3.2, the links between instances of Region or Parameter, and datatypes have been revised and made more powerful, in order to support efficient design patterns for data value modelling in OWL1.0.
Also, the names of the related properties have been changed in order to make them more intuitive.
Furthermore, a large comment field has been added to the 'expresses' object property, in order to clarify some issues about the many interpretations.
In 3.3, the relation between regions, parameters, and datatypes has been still improved.
In 3.5, the person-related classes have been refactored: Person in 3.4 is now SocialPerson, to avoid confusion with commonsense intuition; Person is now the union of social persons and humans, therefore being a subclass of Agent.
In 3.6, other fixes on universal restriction involving expresses. Also added the property 'isConstraintFor' between parameters and entities. Moved the properties: 'assumes' and 'adopts' to the new module: http://www.ontologydesignpatterns.org/ont/dul/Conceptualization.owl.
In 3.7, some fixes on the names of classes and properties related to FormalEntity; created a new separate module for general universal restrictions (DULGCI.owl).
In 3.8, more fixes on the interface to formal entities and links to IOLite.owl.
In 3.9, some naming and comment fixes.
In 3.10, removed cardinality restriction from hasPart and isPartOf restrictions (changed to hasComponent and isComponentOf), for OWL(DL) compatibility. Also enlarged the range of includesAgent to contain both social and physical agents, and of conceptualizes universal restriction on agents, to include all social objects.
In 3.11, some more subproperty axioms have been introduced, and all elements have got English labels.
In 3.12, added some classes to map some old DolceLitePlus classes that were used to align OntoWordNet.
In 3.13, added the LocalConcept class to express a Concept that cannot be used in a Description different from the one that defines it. Also updated some comments.
In 3.14, added some comments.
In 3.15, removed some owl:disjointWith axioms relating Collection to InformationObject, Description, Situation, and SocialAgent. The rationale for doing that is to allow less strict constraints on domain relations involving collections that can be also conceived as descriptions, situations, social agents, or information objects; for example: a collection of sentences from a text (an information object) that are ranked with a relevance criterion can be still considered a text.
In 3.16, name of isActedBy changed to actsThrough, which is clearer. Also added SpatioTemporalRegion as constituted by a SpaceRegion and a TimeInterval.
In 3.17, removed redundant universal axioms from Entity and other top classes. Fixed restrictions on FunctionalSubstance class, and comments in Design and Substance classes.
In 3.18, removed subClassOf axiom from FunctionalSubstance to DesignedArtifact, created a new subclass of FunctionalSubstance, called DesignedSubstance, and created a subClassOf axiom from DesignedSubstance to DesignedArtifact.
In 3.19, removed disjointness axiom between Concept and Collection (the same rationale applies as in the 3.15 version).
In 3.20, revised the comment for Quality, added InformationEntity as the superclass for InformationObject and InformationRealization (represented as the union of those classes). This is needed in many domain ontologies that do not need to distinguish between abstract and concrete aspects of information entities. One possible revision (not implemented here) would be to introduce the relations: expresses and isAbout with a broader domain:InformationEntity, and two more specific properties: abstractlyExpresses and isAbstractlyAbout. This last revision has not been implemented yet, since a large revision procedure should be carried out in order to check the impact of the revision on the existing DOLCE-DnS-Ultralite plugins.
In 3.21, added comment to InformationEntity, and optimized representation of equivalence for InformationRealization.
In 3.22, added comment to Personification.
In 3.23, added associatedWith object property, and put all object properties as subproperties of it.
In 3.24, removed hasProxy datatype property.
In 3.25, generalized domain and range of hasComponent and isComponentOf.
In 3.26, updated some comments in order to clarify or exemplify the concepts.
In 3.27, added rdfs:isDefinedBy annotations for Linked Data browsers.
In 3.28, broadened the universe of pre-/post-conditions to give room to events and states.
In 3.29, added some properties to support DBpedia alignment: sameSettingAs (situational analogous to coparticipation), including relations originating e.g. from sharing kinship, ownership, or roleplaying situations.
In 3.30, completed some domains and ranges (formerly owl:Thing, now dul:Entity), and added axiom: Organism subClassOf PhysicalAgent.
In 3.31, added a restriction to Quality and one to Region in order to ensure the original DOLCE constraint of qualities being always associated with a region, and vice versa. These axioms do not however exclude a direct applicability of qualities or regions to any other entity.
In 3.32, removed redundant union axioms and some restrictions, which spot a negative trade-off between expressivity and complexity.
In 3.33, added the ObjectAggregate class, added two property chains for coparticipation and same situation setting, updated some comments, added an axiom to Transition.
In 3.34, extended mereological support for parthood, introducing hasProperPart (transitive) as a middle property between hasPart (transitive and reflexive) and hasComponent (asymmetric). This solution uses then "reflexive reduction" and "transitive reduction" design patterns (they allow granting property characteristics through the superproperties, but not in the subproperties). Technically, mereology axioms would require that also hasProperPart be asymmetric, however a direct subproperty of an OWL non-simple property (hasPart) cannot be also asymmetric, hence the approximation.
Added a n-ary parthood class in order to suggest an alternative pattern for time- (and space-)indexed part relations. In order to ensure that property characteristics hold also with parthood n-ary, a property chain is introduced which infers a direct dul:partOf property for each parthood individual.
Added a dul:realizesSelfInformation property in order to enable local reflexivity ('Self') axioms for all information realizations.
In 4.0, some foundational changes are introduced.
- Firstly, the temporally indexed versions of some properties are introduced as subclasses of Situation (following the n-ary relation pattern), so covering relations from DOLCE that were skipped because of their larger arity.
- Secondly, D&S's Situation class is extracted from DOLCE top-level distinctions (it used to be a subclass of SocialObject), put as a primitive class under Entity, and not disjoint from any other class. Since we are relaxing the semantics of Situation, this change is fully compatible with previous versions of DUL.
The reason for the change is that it may sound counterintuitive (as many have noticed) to assume a descriptive commitment for situations, but not for events or states.
In fact, D&S provides an epistemological commitment to an ontology, independently from its foundational distinctions. A situation operationalizes that epistemology, and it is better not to put it under any foundational distinction (event, object, fluent, etc.), leaving to the designer whether to use descriptions as epistemological lenses, and so generating a situation, or not.
A consequence is that any entity, when 'framed' by (satisfying) a description, becomes a situation. We can still model entities as being in a situation's setting, and classified by a concept defined in a description.
In 4.1, also the disjointness between Description and Concept has been dropped, in order to unify the projections of an intensional relation with its mereological dependencies. Until now, when a description d1 is part of another description d, we cannot model d1 as a concept defined or used by d, even though it is totally reasonable to consider it as such, i.e., playing a role in d. The impediment is due to the disjointness between being Description and Concept. By dropping that axiom, we can operate on descriptions and concepts more flexibly.
A compositional property, hasInScope, is introduced to link situations that are 'diagonally' modelled through a description, e.g., when a situation s1 involves a description d1 (diagonal meta-level), which is satisfied by a situation s2, s1 hasInScope s2.
Legal reasoning is a relevant example: a case in point (s2) may satisfy two legal norms (d1 and d2) that are conflicting according to a meta-norm d3; d3 can be satisfied by a situation s that 'interprets' (involves) both d1 and d2 against s2. Hence, s hasInScope s2. This may also happen when the two conflicting descriptions are satisfied by alternative situations sharing important elements, as in perspectival reasoning: the meta-description in this case has in scope both alternative situations.
Dave Beckett
Nikki Rogers
Participants in W3C's Semantic Web Deployment Working Group.
Alistair Miles
Sean Bechhofer
An RDF vocabulary for describing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, 'folksonomies', other types of controlled vocabulary, and also concept schemes embedded in glossaries and terminologies.
SKOS Vocabulary
https://prezi.com/embed/cfcvd0wnx1uy/?bgcolor=fffffflock_to_path=0autoplay=0autohide_ctrls=0landing_data=bHVZZmNaNDBIWnNjdEVENDRhZDFNZGNIUE43MHdLNWpsdFJLb2ZHanI5a2dPYjZRcFFhczZBQUNvMWVwa0g4ajV3PT0landing_sign=1qEsK_JTVQmufpQJeDQUVGGXF87qe1jYl0MsFelGZww
01-a-overview-modules.png
01-core-interaction-model(subject-context).png
01-core-structure.png
02-complete-UML-diagram.png
03-webvowl-preview.png
04-OWLGrEd-preview.png
The BCI ontology specifies a foundational metadata model set for real-world multimodal Brain-Computer Interaction (BCI) data capture activities. Its structure depicts a conceptual framework that BCI applications can extend and use in their implementations, to define core concepts that capture a relevant and interoperable metadata vocabulary. This ontology is aligned to the Semantic Sensor Network Ontology (SSN): a domain-independent and end-to-end model for sensor/actuator applications. Hence, its structure has been normalized to assist its use in conjunction with other ontologies or linked data resources to specify any particular definitions (such as units of measurement, time and time series, and location and mobility) that specialized applications in the BCI domain might need. Also, this spec provides general alignment and data modeling guidelines for core concepts, to help BCI applications in their design.
2014-03-27
Sergio José Rodríguez Méndez. Pervasive Embedded Technologies Laboratory (PET Lab), Computer Science Department, NCTU, Taiwan. John K. Zao. Pervasive Embedded Technologies Laboratory (PET Lab), Computer Science Department, NCTU, Taiwan and CerebraTek, Taiwan.
The BCI ontology (BCI-O) provides a high-level semantic structure and a specialized metadata vocabulary set for real-world multimodal BCI data capture activities. It defines a minimalist and simple abstract metadata foundational model for real-world BCI applications that monitor human activity in any scenario. BCI multimodal domain applications are encouraged to extend and use this ontology in their implementations. BCI-O was developed following W3C Semantic Web ontology standards and guidelines, so that BCI applications can express reusable, interoperable and extendable machine-readable BCI metadata models, especially in pervasive M2M environments. For this purpose, its design was aligned to the Semantic Sensor Network Ontology (SSN), following closely its Stimulus-Sensor-Observation Ontology Design Pattern. The core set of relevant metadata definitions for real-world BCI activities was taken from different proposed vocabularies and formats in the BCI domain, such as: XDF, ESS and HED. BCI-O concepts are logically grouped into 11 modules. Each module represents a central topic of the ontology structure where the related concepts give a consistent explanation about its functional data model. The modules are: (*) Subject: concepts related to the depiction of a human being (or human subject) engaging in an activity and its associated state. (*) Context: captures the architectural description of an environment (or context). A human being interacts with a context. (*) Session: represents the interaction between a subject and a context while performing a single activity, under specific settings and conditions. (*) Observations (was SSN-Skeleton): specific concepts for BCI activities aligned to the SOSA/SSN axioms for modeling Observations (the initial alignment was to the Skeleton of [oldSSN]). Metadata related to records, modality types (such as EEG), channeling information, output streams (file formats and access) and stimulus events, are found in this module.
(*) Sensors (was SSN-Device): specific related concepts for BCI activities aligned to the SOSA/SSN axioms for modeling Sensors (under Observations) (the initial alignment was to the Device module of [oldSSN]). Metadata related to devices and their channeling specification are found in this module. (*) System Capabilities (was SSN-MeasurementCapability): specific related concepts for BCI activities aligned to the SSN horizontal segmentation module for System Capabilities (the initial alignment was to the Measurement Capability module of [oldSSN]). Metadata related to channels and other measurement properties are found in this module. (*) Results (was SSN-Data): specific related concepts for BCI activities aligned to the SOSA axioms for modeling Results (the initial alignment was to the Data module of [oldSSN]). Metadata related to data blocks, recorded data, and actuation results are defined in this module. (*) HED: annotation-tag concepts based on Hierarchical Event Descriptors (data tagging). (*) Actuation: specific related concepts for BCI activities aligned to the SOSA axioms and SAN axioms for modeling Actuations. As similarly described in [Seydoux2016], this module depicts how a subject can interact with the physical/virtual world (context) in BCI activities. Its main classes, actuator and actuation, are modeled following the Actuation-Actuator-Effect (AAE) design pattern: a core model for the IoT Ontology (IoT-O). (*) Descriptor: a descriptor represents an external resource set that extends the description of entities in the ontology. A descriptor complements the information associated with the relevant metadata set defined in this ontology. (*) EEG: specific concepts for EEG (Electroencephalography) applications. As one of the ontologies ("concept producers") that reuse SSN, BCI-O was selected as part of the analysis on the usage of SSN.
This spec has been registered in the following open repositories (Open Repositories where BCI-O can be accessed):
(*) w3id.org github -- https://github.com/perma-id/w3id.org/tree/master/BCI-ontology -- Permanent URI for the WWW
(*) Linked Open Vocabularies -- http://lov.okfn.org/dataset/lov/vocabs/bci -- LOD community
(*) BioPortal -- http://bioportal.bioontology.org/ontologies/BCI-O -- BioMedical community
Some early BCI-O applications are presented at the end of the spec.
https://w3id.org/BCI-ontology#
2014-03-27
2018-06-07T23:54:36
(*) [Compton2009] Compton, M.; Neuhaus, H.; Taylor, K. and Tran, K. Reasoning about Sensors and Compositions. In Proceedings of the 2nd International Workshop on Semantic Sensor Networks (SSN09) at ISWC 2009, pp. 33-48, 2009. URL=http://ceur-ws.org/Vol-522/p7.pdf. (*) [ESS] SCCN, "EEG Study Schema (ESS)". Resources: [ESS@SCCN], [ESS v2.0]. (*) [HED] N. Bigdely-Shamlo, K. Kreutz-Delgado, M. Miyakoshi, M. Westerfield, T. Bel-Bahar, C. Kothe and K. Robbins, "Hierarchical event descriptor (HED) tags for analysis of event-related EEG studies". Resources: [HED@SCCN], [HED v2.0]. (*) [OWL-Time] OGC & W3C, "Time Ontology in OWL (W3C Recommendation 19 October 2017)", https://www.w3.org/TR/owl-time/. (*) [Seydoux2016] Seydoux, Nicolas; Drira, Khalil; Hernandez, Nathalie; Monteil, Thierry. IoT-O, a Core-Domain IoT Ontology to Represent Connected Devices Networks. 20th International Conference on Knowledge Engineering and Knowledge Management - Volume 10024 (EKAW 2016). Bologna, Italy. 2016. Pp. 561-576. DOI=http://dx.doi.org/10.1007/978-3-319-49004-5_36. URL=https://dl.acm.org/citation.cfm?id=3092997. See also: (*) [SAN] The "Semantic Actuator Network (SAN)" ontology. http://lov.okfn.org/dataset/lov/vocabs/SAN. (*) [AAE] The "Actuation-Actuator-Effect (AAE)" design pattern. http://ontologydesignpatterns.org/wiki/Submissions:Actuation-Actuator-Effect. (*) [Shafer2001] Shafer, Steven A. N.; Brumitt, Barry; Cadiz, J. J. Interaction Issues in Context-aware Intelligent Environments. Human-Computer Interaction. Volume 16, Issue 2 (December 2001), Pp. 363-378. DOI=http://dx.doi.org/10.1207/S15327051HCI16234_16. URL=http://dl.acm.org/citation.cfm?id=1463124. (*) [SSN] OGC & W3C, "Semantic Sensor Network Ontology (W3C Recommendation 19 October 2017)", https://www.w3.org/TR/vocab-ssn/. See also: W3C Spatial Data on the Web Working Group, https://www.w3.org/2015/spatial/wiki/Main_Page.
(*) [oldSSN] W3C, "Semantic Sensor Network (SSN) Ontology", http://www.w3.org/2005/Incubator/ssn/ssnx/ssn.html. See also: W3C Semantic Sensor Network Incubator Group, "Semantic Sensor Network XG Final Report", http://www.w3.org/2005/Incubator/ssn/XGR-ssn-20110628/. (*) [Unity] Unity Gaming Platform (http://unity3d.com/), Unity's Gaming Modeling Architecture Manual, http://docs.unity3d.com/Manual/index.html. (*) [XDF] C. Kothe and C. Brunner, "XDF (Extensible Data Format)", https://code.google.com/p/xdf/.
Copyright 2014 - 2018 PET Lab, Computer Science Department, NCTU, Taiwan.
http://www.essepuntato.it/lode/owlapi/https://w3id.org/BCI-ontology# (visualise the BCI-ontology with LODE)
Brain-Computer Interaction (BCI) Ontology
bci
https://w3id.org/BCI-ontology#
The BCI ontology describes a framework of core concepts of the specialized metadata set for multimodal "Brain-Computer Interaction" (BCI) data capture activities. It is being developed by the "Pervasive Embedded Technologies" Laboratory (PET Lab) at the Computer Science Department of the National Chiao Tung University (NCTU), Taiwan (Republic of China, R.O.C). Its concepts and structure depict a foundational metadata model for BCI data capture activities, that BCI applications can extend and use in their implementations. Any feedback is welcome. Please mail it to srodriguez@pet.cs.nctu.edu.tw
0.9.5,0.9.4,0.9.3,0.9.2,0.9.1,0.8.9,0.7.5,0.6.1
0.9.6
Mappings to SAN: The Actuation Model of BCI-O was developed based on the following premises: (*) It aims to integrate and reconcile the SOSA axioms [SSN] and the SAN axioms [SAN] for modeling actuations and actuators. (*) It follows closely the proposed Actuation-Actuator-Effect (AAE) design pattern [AAE]: a core model for the IoT Ontology (IoT-O). The following diagram depicts the BCI-O alignment to SAN: 06-Actuation-Model-Alignment-to-SAN.jpg. As a broad application domain ontology for BCI activities, BCI-O integrates and refines some modeling considerations of the SOSA and SAN concepts regarding actuations and actuators. One example is the ActuationEvent alignment to san:Effect (or san:ActuatorOutput): (*) A san:Effect defines any kind of physical modification (an effect on the physical world, i.e., the Context) induced by an actuator (a characteristic of its nature, as an agent that has an effect on the physical world). (*) An ActuationEvent is a Context.Event triggered by an Actuator that changes the state of the ActuationTarget (which is a Context.Object). Another inherited modeling perspective for BCI comes from the definition of the san:impacts object property: [san:Effect] ---- (san:impacts) ---- [oldssn:Property]. The BCI-O alignment to SAN allows the following inferred relationship: [ActuationEvent] ---- (san:impacts) ---- [ImpactedProperty]. The SOSA-SAN integrated Actuation Model of BCI-O represents a major contribution to the IoT and BCI communities.
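The subclass reasoning behind the inferred san:impacts relationship can be sketched as follows (a toy, illustrative Python fragment; the axioms are hand-written from the alignment described above, not loaded from the ontology):

```python
# Sketch of the subclass reasoning behind the SAN alignment: because
# bci:ActuationEvent is aligned under san:Effect, an ActuationEvent instance
# may stand in san:impacts to an ImpactedProperty. Tiny RDFS-style type
# propagation over hand-written subclass axioms (ILLUSTRATIVE ONLY).

SUBCLASS = {  # child -> parent, per the alignment described in the text
    "bci:ActuationEvent": "san:Effect",
    "bci:ImpactedProperty": "oldssn:Property",
}

def types_of(cls):
    """All classes an individual of `cls` belongs to (reflexive-transitive)."""
    out = [cls]
    while cls in SUBCLASS:
        cls = SUBCLASS[cls]
        out.append(cls)
    return out

# An ActuationEvent is also a san:Effect, so the statement
# [ActuationEvent] --(san:impacts)--> [ImpactedProperty] type-checks against
# san:impacts, whose subject is a san:Effect and object an oldssn:Property.
assert "san:Effect" in types_of("bci:ActuationEvent")
assert "oldssn:Property" in types_of("bci:ImpactedProperty")
```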
Mappings to SOSA/SSN: Initially, the BCI ontology was developed in close alignment with the [oldSSN] spec. By mid-July 2017, as the new version of SSN released by W3C ([SSN]) had reached Candidate Recommendation status (11 July 2017), the core concepts were remapped to the new SOSA/SSN definitions, based on the SSNX Vertical Alignment of the W3C Recommendation (19 October 2017), (6) Vertical Segmentation -- (6.2) SSNX Alignment Module:

Classes:

| bci | (mapping) | Initial mapping: oldssn | New mapping: SOSA/SSN | SSNX & BCI-O Remarks |
| --- | --- | --- | --- | --- |
| bci:Aspect | | oldssn:FeatureOfInterest | sosa:FeatureOfInterest | (distinction between observation and actuation targets) |
| bci:Modality | | oldssn:Property | sosa:ObservableProperty | (oldssn:Property ssn:Property); (sosa:ObservableProperty ssn:Property) |
| bci:StimulusEvent | | oldssn:Stimulus | ssn:Stimulus | |
| bci:Device | | oldssn:SensingDevice | sosa:Sensor | (sosa:Sensor oldssn:Sensor); (oldssn:SensingDevice oldssn:Sensor) |
| bci:Record | | oldssn:Observation | sosa:Observation | oldssn:Observation: combination of oldssn axiomatic statements. |
| bci:RecordedData | | oldssn:SensorOutput | sosa:Result | (oldssn:SensorOutput sosa:Result) and (combination of oldssn axiomatic statements); (alignment to sosa:Result) |
| bci:DataBlock | | oldssn:ObservationValue | sosa:Result | (oldssn:ObservationValue sosa:Result) and (combination of oldssn axiomatic statements); (alignment removed) |
| bci:Channel | | oldssn:MeasurementCapability | ssn-system:SystemCapability | (oldssn:MeasurementCapability ssn-system:SystemCapability) and (combination of oldssn axiomatic statements) |
| bci:NonChannel | | oldssn:MeasurementCapability | ssn-system:SystemCapability | (oldssn:MeasurementCapability ssn-system:SystemCapability) and (combination of oldssn axiomatic statements) |
| bci:SamplingRate | | oldssn:Frequency | ssn-system:Frequency | (oldssn:Frequency ssn-system:Frequency) |
| bci:DeviceSpec | | oldssn:SensorDataSheet | | unchanged |
| bci:Actuation | | | sosa:Actuation | new |
| bci:Actuator | | | sosa:Actuator | new |
| bci:ImpactedProperty | | | sosa:ActuatableProperty | new |
| bci:ActuationResult | | | sosa:Result | new |
| bci:ActuationTarget | | | sosa:FeatureOfInterest | new |

Object Properties:

| bci | (mapping) | Initial mapping: oldssn | New mapping: SOSA/SSN | SSNX & BCI-O Remarks |
| --- | --- | --- | --- | --- |
| bci:hasModality | | oldssn:hasProperty | ssn:hasProperty | |
| bci:isModalityOf | | oldssn:isPropertyOf | ssn:isPropertyOf | |
| bci:aspectOfInterest | | oldssn:featureOfInterest | sosa:hasFeatureOfInterest | |
| bci:madeRecord | | oldssn:madeObservation | sosa:madeObservation | |
| bci:observedByDevice | | oldssn:observedBy | sosa:madeBySensor | |
| bci:observedModality | | oldssn:observedProperty | sosa:observedProperty | |
| bci:detects | | oldssn:detects | ssn:detects | |
| bci:isProxyFor | | oldssn:isProxyFor | ssn:isProxyFor | |
| bci:hasValue | | oldssn:hasValue | sosa:hasResult | (deprecated to simplify the model) |
| bci:isProducedByDevice | | oldssn:isProducedBy | | not defined in SOSA/SSN; deprecated. Defined as the following role inclusion axiom: (bci:isObservationResultOf sosa:isResultOf) * (bci:observedByDevice sosa:madeBySensor) ⊆ (bci:isProducedByDevice) |
| bci:observationResult | | oldssn:observationResult | sosa:hasResult | |
| bci:forModality | | oldssn:forProperty | ssn:forProperty | |
| bci:hasNonChannelData | | oldssn:hasMeasurementCapability | ssn-system:hasSystemCapability | (oldssn:hasMeasurementCapability ssn-system:hasSystemCapability) |
| bci:observes | | oldssn:observes | sosa:observes | in combination with the property-chain axioms |
| bci:ofAspect | | oldssn:ofFeature | | not defined in SOSA/SSN; deprecated. |
| bci:includesEvent | | DUL:includesEvent | | not used in SOSA/SSN; deprecated. A new property has been defined: ssn:wasOriginatedBy |

Symbolic notation:
(*) aligned to (subclass or sub-property of)
(*) equivalent concepts
(*) the new mapping implies a conceptual update

As of January 2018, since all alignments were updated to the SOSA/SSN core classes, the new mappings are based on the Dolce-Ultralite Alignment Module of [SSN] ((6) Vertical Segmentation -- (6.1) Dolce-Ultralite Alignment Module); thus, the BCI ontology imports the ssn-dul definitions. As one of the ontologies ("concept producers") that reuse SSN, BCI-O was selected as part of the OGC & W3C analysis on the usage of the SSN ontology ((3) Usage in ontologies (Producers)).
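The remapped alignments in the tables above boil down to subclass and sub-property axioms against the SOSA/SSN namespaces. A minimal illustrative sketch in Turtle (not the normative axioms, which are those published in the ontology itself; whether a given pair is a subclass or an equivalence alignment follows the symbolic notation of the table):

```turtle
@prefix bci:  <https://w3id.org/BCI-ontology#> .
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ssn:  <http://www.w3.org/ns/ssn/> .
@prefix ssn-system: <http://www.w3.org/ns/ssn/systems/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Class alignments (rows of the "Classes" table):
bci:Aspect   rdfs:subClassOf sosa:FeatureOfInterest .
bci:Modality rdfs:subClassOf sosa:ObservableProperty .
bci:Device   rdfs:subClassOf sosa:Sensor .
bci:Record   rdfs:subClassOf sosa:Observation .
bci:Channel  rdfs:subClassOf ssn-system:SystemCapability .

# Object property alignments (rows of the "Object Properties" table):
bci:observedByDevice  rdfs:subPropertyOf sosa:madeBySensor .
bci:observationResult rdfs:subPropertyOf sosa:hasResult .
bci:madeRecord        rdfs:subPropertyOf sosa:madeObservation .
```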
Regarding Aspect and Modality: The importance of, and relationship between, the concepts sosa:FeatureOfInterest (superclass of Aspect) and sosa:ObservableProperty (superclass of Modality) are shown and explained in [oldSSN]: (*) (5.4.4 Agriculture Meteorology Sensor Network -- 5.4.4.1.1 Sensor selection), (*) (5.4.4 Agriculture Meteorology Sensor Network -- 5.4.4.2.2 CF (Climate and Forecast) ontologies), (*) (5.4.3 Wind sensor (WM30) -- 5.4.3.6 Wind Feature and properties), (*) (Wind Sensor example -- Feature of interest). As an SSN domain application ontology, BCI-O defines and specializes these concepts appropriately to describe the nature of observations in BCI data capture activities.
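In practice, the Aspect/Modality pair mirrors the SOSA feature-of-interest/observable-property pair. A small illustrative sketch (the instance names are hypothetical):

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .
@base <http://example.org/data/> .

# Hypothetical instances: an aspect (feature of interest) carrying a modality
# (observable property), linked in both directions.
<motor-cortex-activity> a bci:Aspect ;
    bci:hasModality <mu-rhythm> .
<mu-rhythm> a bci:Modality ;
    bci:isModalityOf <motor-cortex-activity> .
```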
Regarding EEG concepts: This ontology defines the following EEG concepts: (*) EegModality. (*) EegNonChannel. (*) EegDevice. (*) EegRecord. (*) EegChannel. If necessary, BCI applications may define a set of restrictions and specialized relations (subproperties) among the EEG concepts.
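For instance, an application could narrow the generic record/device relation to the EEG specializations. A minimal sketch, assuming a hypothetical app: namespace (the sub-property and class names are illustrative, not part of BCI-O):

```turtle
@prefix bci:  <https://w3id.org/BCI-ontology#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix app:  <http://example.org/app#> .

# Hypothetical sub-property: EEG records are observed only by EEG devices.
app:observedByEegDevice rdfs:subPropertyOf bci:observedByDevice ;
    rdfs:domain bci:EegRecord ;
    rdfs:range  bci:EegDevice .

# Hypothetical restriction on a specialized record class.
app:ClinicalEegRecord rdfs:subClassOf bci:EegRecord ,
    [ a owl:Restriction ;
      owl:onProperty bci:observedByDevice ;
      owl:allValuesFrom bci:EegDevice ] .
```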
Regarding the Procedure concept: [SSN] defines a general concept of Procedure, which encompasses any kind of Observation or Actuation; thus, it fits the domain of BCI data capture activities well. Following ontology engineering good practices, and given that there is no BCI-specific description of procedures for data capture activities, this ontology does not define any new concept for them. Therefore, BCI-O applications that need to model procedures in their metadata definitions should align directly to the concept sosa:Procedure. It is worth noting that this practice applies to any other high-level concept that BCI-O applications might include in their vocabulary.
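A direct alignment to sosa:Procedure might look as follows (the app: class and the instance are hypothetical):

```turtle
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix app:  <http://example.org/app#> .
@base <http://example.org/data/> .

# Hypothetical application-level procedure, aligned directly to SOSA:
app:EegCalibrationProcedure rdfs:subClassOf sosa:Procedure .
<calibration-run/7> a app:EegCalibrationProcedure .
```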
Regarding the treatment of measurement units: This ontology leaves it to BCI applications to decide how to handle the level of semantic expressiveness for measurement units. In general, there are two approaches (based on their data requirements):
(*) For data-centric applications: using datatype properties (with previously known units of measurement) means that it is not necessary to incorporate into the ontology definition a semantic structure for properly describing units of measurement.
(*) For semantic-centric applications: it is necessary to incorporate into the ontology definition a structure that properly describes units of measurement, depending on the required level of semantic expressiveness.
The vast majority of BCI applications are heavily data-centric. Well-known measurement units for a wide range of metadata attributes are used in different specs (such as [XDF] and [ESS]), e.g., pixels, mm, degrees, and microvolts. For these applications, defining a relevant datatype property set without specifying measurement units suffices. For BCI applications that require a proper level of semantic expressiveness for measurement units, this ontology provides the following guideline:
(*) The BCI concepts subject to extension are those related to Device and Record (including the channeling spec definitions).
(*) From the perspective of the BCI ontology's alignment to the Stimulus-Sensor-Observation Ontology Design Pattern, the core SSN concepts that "map" to quantities (and, therefore, to units of measurement) are ssn-system:SystemCapability and ssn-system:SystemProperty. Thus, BCI applications should pay special attention to extending the semantic structure of the concepts: (*) Channel, and (*) NonChannel.
(*) BCI applications should extend the BCI ontology based on the [oldSSN] guidelines, explained in:
  (*) (5.3.10 Data -- 5.3.10.2 How to attach a data value to a property?),
  (*) (5.3.13 Energy -- 5.3.13.3 How to represent a WSN node with information about its energy consumption),
  (*) (5.4.2 Smart product example -- 5.4.2.2 Sensor),
  (*) (5.4.2 Smart product example -- 5.4.2.3 Measurement capabilities),
  (*) (5.4.2 Smart product example -- 5.4.2.4 Observation),
  (*) (5.4.4 Agriculture Meteorology Sensor Network -- 5.4.4.1.3 Sensor view),
  (*) (5.4.4 Agriculture Meteorology Sensor Network -- 5.4.4.2.3 Units of measurement and quantity ontologies).
(*) It is recommended to incorporate the semantic extensions through alignment to a proper ontology for units of measurement. Recommended ontologies for units of measurement are:
  (*) QUDT - Quantities, Units, Dimensions and Data Types Ontologies, developed by the NExIOM project (NASA, TopQuadrant).
  (*) Ontology of units of Measure (OM): om-1.8.2, developed by a team of researchers at Wageningen UR (wurvoc.org).
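The two approaches can be contrasted on a Channel extension. A hedged sketch (app:samplingRateHz and app:samplingRate are hypothetical properties; the QUDT pattern follows the qudt-1-1 schema used in the examples in this spec):

```turtle
@prefix bci: <https://w3id.org/BCI-ontology#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix qudt-1-1: <http://qudt.org/1.1/schema/qudt#> .
@prefix qudt-unit-1-1: <http://qudt.org/1.1/vocab/unit#> .
@prefix app: <http://example.org/app#> .
@base <http://example.org/data/> .

# (a) Data-centric: the unit (Hz) is fixed by convention, not modeled.
<eeg-channel/1> a bci:EegChannel ;
    app:samplingRateHz "512.0"^^xsd:double .

# (b) Semantic-centric: the unit is modeled explicitly via QUDT.
<eeg-channel/2> a bci:EegChannel ;
    app:samplingRate [ a qudt-1-1:QuantityValue ;
        qudt-1-1:numericValue "512.0"^^xsd:double ;
        qudt-1-1:unit qudt-unit-1-1:Hertz ] .
```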
Actuation: Automated Wheelchair: The following use case presents an example of how to define the related BCI-O actuation concepts. Wheelchair driving scenario:
(*) Purpose: use an actuator capable of controlling a wheelchair based on the input from a BCI/EEG record (obtained directly from the subject's head).
(*) Description: Alice is driving a wheelchair through a human interface composed of three major components:
  (*) An EEG sensor capable of reading brain signals.
  (*) A computing system capable of processing and analyzing (classifying) the brain signals collected from the EEG sensor.
  (*) An actuator capable of controlling the wheelchair's movement (such as direction and acceleration) based on the input from (2).
The actuator is a device that works in the following way:
(*) The processed brain signals issue specific movement commands to the actuator, such as:
  (*) Command: "slow down" with:
    (*) Direction: go forward (no change in the direction).
    (*) Acceleration: -10.5 cm/s2 (change in the speed).
(*) The actuator mechanism:
  (*) Implements the procedure (actuation) to control the wheelchair.
  (*) Triggers a series of steps aimed at changing the wheelchair's state: to decelerate its wheels.
The modeled BCI-O concepts involved in this scenario, excluding those from the observation component (except for EegRecord and EegDevice), are listed below:
(*) Subject (x1): "Alice"
(*) Session (x1): "a situation where the observation (EEG recording) and actuation happened"
(*) Activity (x1): "controlling the automated wheelchair"
(*) Context (x1): "at home"
(*) Context.Scene (x1): "specific indoors situation"
(*) EegRecord (x1): "observation of the EEG record that serves as the input of the actuations"
(*) EegDevice (x1): "EEG device that made the EEG recordings"
(*) Command (x1): "slow down : (EEG record) -- actuators"
(*) Actuator (x2): "the devices that perform the actuations"
(*) Actuation (x2): "the procedures that change the state of the wheels via actuators"
(*) ImpactedProperty (x2): "the speed of the wheels (their state)"
(*) ActuationEvent (x2): "reduce the speed of the wheels"
(*) ActuationResult (x2): "slowing down" ("the effect of decelerating the wheels")
(*) ActuationTarget (x2): "rear wheels"
(*) Context.Object (composite) (x1): "wheelchair"
(*) Context.Method (x1): "deceleration of a wheel"
<DCMIType-StillImage>05-UserCase-Actuation.jpg</DCMIType-StillImage>
An RDF file containing a graph corresponding to this example is available.
<div class="example"><div class="example-title marker">Actuation: Automated Wheelchair</div><pre class="hljs xml" aria-busy="false" aria-live="polite">
@prefix : <http://example.org/data/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix schema: <http://schema.org/> .
@prefix time: <http://www.w3.org/2006/time#> .
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ssn: <http://www.w3.org/ns/ssn/> .
@prefix bci: <https://w3id.org/BCI-ontology#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix qudt-1-1: <http://qudt.org/1.1/schema/qudt#> .
@prefix qudt-unit-1-1: <http://qudt.org/1.1/vocab/unit#> .
@base <http://example.org/data/> .

# Alice performs a "controlling-wheelchair" activity "at home" (scene labeled as: "indoors-X3" #4).
# The context scene indoors-X3 #4 has the wheelchair as part of its structural composition.
<Alice> rdf:type bci:Subject .
<controlling-wheelchair> rdf:type bci:Activity .
<context/at-home/6> rdf:type bci:Context ;
    bci:hasScene <scene/indoors-X3/4> .
<scene/indoors-X3/4> rdf:type bci:Context.Scene ;
    bci:hasObject <wheelchair> .

# The wheelchair has a left-rear-wheel (#47) and a right-rear-wheel (#39), which are the actuation targets.
# All of them are context objects.
<wheelchair> rdf:type bci:Context.Object ;
    bci:hasObject <left-rear-wheel/47> ;   # bci:ActuationTarget defined below
    bci:hasObject <right-rear-wheel/39> .  # bci:ActuationTarget defined below

# The session is titled "an actuation example".
# The session observes an EEG-Record and covers 2 actuations (one for each rear wheel).
<session/91> rdf:type bci:Session ;
    bci:hasTitle "an actuation example" ;
    bci:isSessionOf <Alice>, <context/at-home/6> ;
    bci:hasActivity <controlling-wheelchair> ;
    bci:hasRecord <EegRecord/46> ;
    bci:hasActuation <actuation/62>, <actuation/63> .

# Alice's EEG-Record is observed by EEG-device ctnp-A128 #5.
# This is the input for the command to "slow down" #11 that initiates the execution of the actuators.
# The details about the recordings' data and results are not shown in this graph.
<EegRecord/46> rdf:type bci:EegRecord ;
    bci:observedByDevice <ctnp-A128/5> ;
    bci:isInputFor <cmd/slowDown/11> .

# EegDevice ctnp-A128 #5 observes the EEG recordings of Alice.
<ctnp-A128/5> rdf:type bci:EegDevice ;
    bci:madeRecord <EegRecord/46> .

# The SlowDown command defines two associated values:
# - direction: in this scenario, its value is "forward" (implies no change in this state's axis).
# - acceleration: in this scenario, its value is -10.5 cm/s2 (implies a change in this state's axis).
# The SlowDown command #11 gives the actuators servo4WC-ABC #1 and #2 their entry point for execution,
# based on the input received from the EEG recordings #46.
:SlowDown rdfs:subClassOf bci:Command .
:direction a owl:ObjectProperty ;
    schema:domainIncludes :SlowDown ;
    schema:rangeIncludes rdfs:Literal .
:acceleration a owl:ObjectProperty ;
    schema:domainIncludes :SlowDown ;         # bci:Command
    schema:domainIncludes :slowingDown ;      # bci:ActuationResult
    schema:domainIncludes :accelerateWheel ;  # bci:Context.Method
    schema:rangeIncludes qudt-1-1:QuantityValue .
_:acceleration-value rdf:type qudt-1-1:QuantityValue ;
    qudt-1-1:numericValue "-10.5"^^xsd:double ;  # deceleration
    qudt-1-1:unit qudt-unit-1-1:CentimeterPerSecondSquared .
<cmd/slowDown/11> rdf:type :SlowDown ;
    :direction "forward" ;
    :acceleration _:acceleration-value ;
    bci:consumesInputFrom <EegRecord/46> ;
    bci:isExecutedBy <servo4WC-ABC/1>, <servo4WC-ABC/2> .

# servo4WC-ABC #1 made actuation #62, and servo4WC-ABC #2 made actuation #63:
# both execute the command to "slow down" #11.
# The model says that:
# - servo4WC-ABC/1 is designed to automatically change the speed of the right rear wheel.
# - servo4WC-ABC/2 is designed to automatically change the speed of the left rear wheel.
# Each actuator triggers an event to reduce the speed of the wheel that it is bound to.
<servo4WC-ABC/1> rdf:type bci:Actuator ;
    sosa:madeActuation <actuation/62> ;
    ssn:forProperty <right-rear-wheel/39#speed> ;
    bci:triggers <ae/reduceSpeed/81> .
<servo4WC-ABC/2> rdf:type bci:Actuator ;
    sosa:madeActuation <actuation/63> ;
    ssn:forProperty <left-rear-wheel/47#speed> ;
    bci:triggers <ae/reduceSpeed/82> .

# The rear wheels are the actuation targets (one for each corresponding actuation procedure).
# Each wheel's speed state is an ImpactedProperty.
# The model allows us to state explicitly that:
# - left-rear-wheel/47#speed is a property of left-rear-wheel/47
# - right-rear-wheel/39#speed is a property of right-rear-wheel/39
<left-rear-wheel/47> rdf:type bci:ActuationTarget ;
    ssn:hasProperty <left-rear-wheel/47#speed> .
<right-rear-wheel/39> rdf:type bci:ActuationTarget ;
    ssn:hasProperty <right-rear-wheel/39#speed> .
<left-rear-wheel/47#speed> rdf:type bci:ImpactedProperty ;
    sosa:isActedOnBy <actuation/63> .
<right-rear-wheel/39#speed> rdf:type bci:ImpactedProperty ;
    sosa:isActedOnBy <actuation/62> .

# Actuation #62 acted on the state (speed) of right-rear-wheel #39,
# and returned "slowing down" #788 as its associated result.
# Actuation #63 acted on the state (speed) of left-rear-wheel #47,
# and returned "slowing down" #789 as its associated result.
# Each actuation has a timestamp for its associated result.
<actuation/62> rdf:type bci:Actuation ;
    sosa:actsOnProperty <right-rear-wheel/39#speed> ;
    sosa:madeByActuator <servo4WC-ABC/1> ;
    sosa:hasResult <ar/slowingDown/788> ;
    sosa:phenomenonTime _:actuation-time ;
    sosa:resultTime "2018-05-06T20:05:13+00:00"^^xsd:dateTimeStamp .
<actuation/63> rdf:type bci:Actuation ;
    sosa:actsOnProperty <left-rear-wheel/47#speed> ;
    sosa:madeByActuator <servo4WC-ABC/2> ;
    sosa:hasResult <ar/slowingDown/789> ;
    sosa:phenomenonTime _:actuation-time ;
    sosa:resultTime "2018-05-06T20:05:13+00:00"^^xsd:dateTimeStamp .

# The time interval of the actuations:
_:actuation-time rdf:type time:Interval ;
    time:hasBeginning [ rdf:type time:Instant ;
        time:inXSDDateTimeStamp "2018-05-06T20:05:12+00:00"^^xsd:dateTimeStamp ] ;
    time:hasEnd [ rdf:type time:Instant ;
        time:inXSDDateTimeStamp "2018-05-06T20:05:13+00:00"^^xsd:dateTimeStamp ] .

# The slowingDown actuation result defines a value for the acceleration (change of speed): see above...
# // :acceleration schema:domainIncludes :slowingDown ;
# // :slowingDown rdfs:subClassOf bci:ActuationResult .
# The actuation results "slowing down" #788 and #789 are defined in the following way:
# * Both "slowing down" #788 and #789 are the actuation results of actuations #62 and #63, respectively.
# * They have the associated values:
#   - change of speed: the deceleration of 10.5 cm/s2.
# * Both "slowing down" #788 and #789 involve the events "reduce speed" #81 and #82, respectively.
<ar/slowingDown/788> rdf:type :slowingDown ;
    :acceleration _:acceleration-value ;
    bci:involves <ae/reduceSpeed/81> .
<ar/slowingDown/789> rdf:type :slowingDown ;
    :acceleration _:acceleration-value ;
    bci:involves <ae/reduceSpeed/82> .

# The actuation events (#81 and #82) change the state of the actuation targets (the rear wheels),
# through the "accelerateWheel" (context method) deceleration effectuation, which handles the acceleration value.
# The "accelerateWheel" context method defines a "parameter" value: see above...
# // :acceleration schema:domainIncludes :accelerateWheel ;
# // :accelerateWheel rdfs:subClassOf bci:Context.Method .
<deacceleration> rdf:type :accelerateWheel ;
    :acceleration _:acceleration-value .
<ae/reduceSpeed/81> rdf:type bci:ActuationEvent ;
    bci:effectuates <deacceleration> ;
    bci:changes <right-rear-wheel/39> .
<ae/reduceSpeed/82> rdf:type bci:ActuationEvent ;
    bci:effectuates <deacceleration> ;
    bci:changes <left-rear-wheel/47> .
</pre></div>
Observation Context: Visually Evoked Potential (VEP) Virtual Environment Navigation for Glaucoma Patients: The following use case presents an example of how to define the related BCI-O observation and context concepts.
(*) Purpose: to present flickering visual stimuli on a subset of objects in a virtual environment and to determine whether the subject suffers vision loss in a particular region, based on his EEG responses to those stimuli.
(*) Description: Bob is navigating towards a pillar (target) located at the center of the field while avoiding obstacles randomly placed on the path in between. A subject suffering from glaucoma may not be able to see objects in a certain sector of his peripheral vision and hence might unknowingly hit those unseen objects. The major components in this scenario are:
  (*) An EEG sensor capable of capturing brain signals using a certain electrode placement.
  (*) A computing system capable of processing and analyzing (classifying) the brain signals collected from the EEG sensor.
  (*) A gameplay recording mechanism to record the pathway that the subject took to reach the target.
  (*) An event generation mechanism that fires whenever the subject interacts with an object in the field or whenever a flickering stimulus is presented in his field of vision.
The context (virtual environment) consists of the following objects:
(*) An assortment of objects of different sizes typically found in a forest, randomly placed throughout the environment, e.g., stones, trees, and pit holes.
(*) A pillar (target) placed in the middle of the field, flickering at an unnoticeably high frequency, e.g., 45 Hz.
(*) A selection of objects that flicker at a high frequency lower than the target frequency, e.g., 30 Hz - 40 Hz.
(*) A high flickering frequency is necessary in order to minimize the uneasiness felt by the subjects.
Several events are generated whenever:
(*) The subject hits an object (event data: location of the object).
(*) A flickering object appears within the subject's field of vision (event data: flickering frequency).
(*) The subject reaches the target.
A 3D animation of this virtual environment is available. Experiment flow:
(*) The objective is for the subject to walk towards the middle of the field without hitting any objects placed on the field.
(*) Events are triggered and recorded whenever certain conditions are met.
(*) The pathway that the subject took to reach the target, and the locations of the objects that he hit (if any), are recorded.
Analysis mechanism:
(*) The flickering objects serve as an indicator of whether the subject actually sees them within his peripheral vision field: they induce an EEG response matching the flickering frequency if the subject actually sees them.
(*) The event markers aid the analyst in performing selective epoch analysis based on the interests of the analysis, e.g., epochs in which a flickering object is within the subject's field of vision, or epochs in which he hit an object.
<DCMIType-StillImage>05-UserCase-Observation-Context.jpg</DCMIType-StillImage>
An RDF file containing a graph corresponding to this example is available.
<div class="example"><div class="example-title marker">Observation Context: Visually Evoked Potential (VEP) Virtual Environment Navigation for Glaucoma Patients</div><pre class="hljs xml" aria-busy="false" aria-live="polite">
@prefix : <http://example.org/data/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix schema: <http://schema.org/> .
@prefix time: <http://www.w3.org/2006/time#> .
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix ssn: <http://www.w3.org/ns/ssn/> .
@prefix bci: <https://w3id.org/BCI-ontology#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix qudt-1-1: <http://qudt.org/1.1/schema/qudt#> .
@prefix qudt-unit-1-1: <http://qudt.org/1.1/vocab/unit#> .
@base <http://example.org/data/> .

# The 3 main sets of definitions:
# (i) Virtual Environment (Context): objects, behaviour, events (stimuli).
# (ii) Subject, Actions.
# (iii) Session, EEG Record, EEG sensor (channeling scheme).

# Bob navigates in (interacts with) a virtual environment (context), labeled as ven4gp #1.
# The context scene forest #1 depicts the architectural design of the objects' layout as part of its structural composition.
# The session's activity is labeled "glaucoma-tracking #1".
<Bob> rdf:type bci:Subject .
<glaucoma-tracking/1> rdf:type bci:Activity .
<context/ven4gp/1> rdf:type bci:Context ;
    bci:hasScene <scene/forest/1> .
<scene/forest/1> rdf:type bci:Context.Scene ;
    bci:hasObject
        # Objects without flickering behavior:
        <static-stone/1>, <static-stone/2>, <static-stone/3>, <static-stone/4>,
        # ... associate here all the "static stones" in the scene.
        <static-tree/1>, <static-tree/2>, <static-tree/3>, <static-tree/4>,
        # ... associate here all the "static trees" in the scene.
        <static-pit-hole/1>, <static-pit-hole/2>, <static-pit-hole/3>,
        # ... associate here all the "static pit-holes" in the scene.
        # Objects with flickering behavior:
        <flickering-stone/1>, <flickering-stone/2>, <flickering-stone/3>,
        # ... associate here all the "flickering stones" in the scene.
        <flickering-tree/1>, <flickering-tree/2>, <flickering-tree/3>, <flickering-tree/4>,
        # ... associate here all the "flickering trees" in the scene.
        <flickering-pit-hole/1>, <flickering-pit-hole/2>, <flickering-pit-hole/3>,
        # ... associate here all the "flickering pit-holes" in the scene.
        # Target: pillar
        <target/pillar/1> .

# Below (1~9) are the main structural and functional component definitions for the virtual environment.
# (1) The LocatedObject context object: defines 3 associated values that represent the object's location (coordinates).
# - X-coor: coordinate in the X axis.
# - Y-coor: coordinate in the Y axis.
# - Z-coor: coordinate in the Z axis.
# This class gives the "location" notion to all the participant objects in the scene.
:LocatedObject rdfs:subClassOf bci:Context.Object .
:X-coor a owl:ObjectProperty ;
    schema:domainIncludes :LocatedObject ;
    schema:rangeIncludes qudt-1-1:QuantityValue .
:Y-coor a owl:ObjectProperty ;
    schema:domainIncludes :LocatedObject ;
    schema:rangeIncludes qudt-1-1:QuantityValue .
:Z-coor a owl:ObjectProperty ;
    schema:domainIncludes :LocatedObject ;
    schema:rangeIncludes qudt-1-1:QuantityValue .

# (2) The context objects that define a flickering (and non-flickering) behavior:
# - NonFlickeringObject: doesn't have any flickering mechanism (context method).
# - LowerFrequencyFlickeringObject: has a constant flickering mechanism (context method) --30Hz--.
# - TargetFrequencyFlickeringObject: has a constant flickering mechanism (context method) --45Hz--.
:NonFlickeringObject rdfs:subClassOf bci:Context.Object .
:LowerFrequencyFlickeringObject rdfs:subClassOf bci:Context.Object .
:TargetFrequencyFlickeringObject rdfs:subClassOf bci:Context.Object .

# (3) The located context object types of different nature that define the notions of stones, trees and pit holes.
:Stone rdfs:subClassOf :LocatedObject .
:Tree rdfs:subClassOf :LocatedObject .
:Pit-hole rdfs:subClassOf :LocatedObject .

# (4) The classes of static objects that don't flicker.
:StaticStone rdfs:subClassOf :NonFlickeringObject, :Stone .
:StaticTree rdfs:subClassOf :NonFlickeringObject, :Tree .
:StaticPit-hole rdfs:subClassOf :NonFlickeringObject, :Pit-hole .

# (5) The classes of flickering objects that are not the target (Pillar).
:FlickeringStone rdfs:subClassOf :LowerFrequencyFlickeringObject, :Stone .
:FlickeringTree rdfs:subClassOf :LowerFrequencyFlickeringObject, :Tree .
:FlickeringPit-hole rdfs:subClassOf :LowerFrequencyFlickeringObject, :Pit-hole .

# (6) The target (Pillar).
:Pillar rdfs:subClassOf :TargetFrequencyFlickeringObject, :LocatedObject .
<target/pillar/1> rdf:type :Pillar .
# (7) The flickering mechanisms (types of context methods):
# - flickeringAtLowerFrequency: a constant flickering mechanism of 30 Hz.
# - flickeringAtTargetFrequency: a constant flickering mechanism of 45 Hz.
# Both are bound to their corresponding objects.
:hasFrequency a owl:ObjectProperty ;
    schema:domainIncludes bci:Context.Method ;
    schema:rangeIncludes qudt-1-1:QuantityValue .
_:lower-freq-value rdf:type qudt-1-1:QuantityValue ;
    qudt-1-1:numericValue "30.0"^^xsd:double ;
    qudt-1-1:unit qudt-unit-1-1:Hertz .
_:target-freq-value rdf:type qudt-1-1:QuantityValue ;
    qudt-1-1:numericValue "45.0"^^xsd:double ;
    qudt-1-1:unit qudt-unit-1-1:Hertz .
<flickeringAtLowerFrequency> rdf:type bci:Context.Method ;
    bci:definesBehaviorOf :LowerFrequencyFlickeringObject ;
    :hasFrequency _:lower-freq-value .   # a constant value
<flickeringAtTargetFrequency> rdf:type bci:Context.Method ;
    bci:definesBehaviorOf :TargetFrequencyFlickeringObject ;
    :hasFrequency _:target-freq-value .  # a constant value

# (8) The stimuli events notion associated to the flickering methods.
<flickeringEvent> rdf:type bci:Context.Event ;
    bci:effectuates <flickeringAtLowerFrequency>, <flickeringAtTargetFrequency> .

# (9) The subject events (actions) and capability.
<walk> rdf:type bci:Context.Capability .
<Bob> bci:canPerform <walk> .
:Action.HitObject rdfs:subClassOf bci:Action .
:Action.DetectFlickeringObject rdfs:subClassOf bci:Action .
:Action.ReachTarget rdfs:subClassOf bci:Action .
# Examples of how Bob issues his related actions:
# <Bob> bci:issues <action/hit-object/344>
# <Bob> bci:issues <action/detect-flickering/96>

# Below is an excerpt of the definitions for the assortment of objects that participate in the scene:
<static-stone/1> rdf:type :StaticStone ;
    :X-coor [ rdf:type qudt-1-1:QuantityValue ;
        qudt-1-1:numericValue "6.75"^^xsd:double ;
        qudt-1-1:unit qudt-unit-1-1:Meter ] ;
    :Y-coor [ rdf:type qudt-1-1:QuantityValue ;
        qudt-1-1:numericValue "2.15"^^xsd:double ;
        qudt-1-1:unit qudt-unit-1-1:Meter ] ;
    :Z-coor [ rdf:type qudt-1-1:QuantityValue ;
        qudt-1-1:numericValue "0.0"^^xsd:double ;
        qudt-1-1:unit qudt-unit-1-1:Meter ] .
<static-stone/2> rdf:type :StaticStone .  # + X, Y, Z coordinates ;
# static-stone/3 ... , static-stone/4 ...
<static-tree/1> rdf:type :StaticTree .  # + X, Y, Z coordinates ;
# static-tree/2 ... , static-tree/3 ... , static-tree/4 ...
<static-pit-hole/1> rdf:type :StaticPit-hole .  # + X, Y, Z coordinates ;
# static-pit-hole/2 ... , static-pit-hole/3 ...
<flickering-stone/1> rdf:type :FlickeringStone ;
    :X-coor [ rdf:type qudt-1-1:QuantityValue ;
        qudt-1-1:numericValue "5.05"^^xsd:double ;
        qudt-1-1:unit qudt-unit-1-1:Meter ] ;
    :Y-coor [ rdf:type qudt-1-1:QuantityValue ;
        qudt-1-1:numericValue "12.37"^^xsd:double ;
        qudt-1-1:unit qudt-unit-1-1:Meter ] ;
    :Z-coor [ rdf:type qudt-1-1:QuantityValue ;
        qudt-1-1:numericValue "0.0"^^xsd:double ;
        qudt-1-1:unit qudt-unit-1-1:Meter ] .
<flickering-stone/2> rdf:type :FlickeringStone .  # + X, Y, Z coordinates ;
# flickering-stone/3 ...
<flickering-tree/1> rdf:type :FlickeringTree .  # + X, Y, Z coordinates ;
# flickering-tree/2, flickering-tree/3 ...
<flickering-pit-hole/1> rdf:type :FlickeringPit-hole .  # + X, Y, Z coordinates ;
# flickering-pit-hole/2 ... , flickering-pit-hole/3 ...
<target/pillar/1> rdf:type :Pillar .  # + X, Y, Z coordinates ; location: in the middle of the field.

# The session observes an EEG-Record with its related settings.
<session/9> rdf:type bci:Session ;
    bci:hasTitle "Visually Evoked Potential (VEP) Virtual Environment Navigation for Glaucoma Patients" ;
    bci:isSessionOf <Bob>, <context/ven4gp/1> ;
    bci:hasActivity <glaucoma-tracking/1> ;
    bci:hasRecord <EegRecord/18> .

# Bob's EEG-Record is observed by the EEG-device eeg-dev #3.
# The EEG observation is associated to the following metadata:
# - the data recordings (result: RecordedData).
# - the channeling schema used to collect the brain signals (RecordChannelingSpec).
# - a time interval for its duration.
# - a timestamp for its associated result.
<EegRecord/18> rdf:type bci:EegRecord ;
    bci:observedByDevice <eeg-dev/3> ;
    bci:hasRecordChannelingSpec <channeling-spec/18> ;
    bci:observationResult <data-file/2> ;
    bci:aspectOfInterest <vision-field-sensitivity> ;
    bci:observedModality <mfSSVEP> ;
    ssn:wasOriginatedBy <flickeringAtLowerFrequency>, <flickeringAtTargetFrequency> ;
    sosa:phenomenonTime _:observation-time ;
    sosa:resultTime "2018-05-28T16:42:51+00:00"^^xsd:dateTimeStamp .

# The time interval of the observation:
_:observation-time rdf:type time:Interval ;
    time:hasBeginning [ rdf:type time:Instant ;
        time:inXSDDateTimeStamp "2018-05-28T16:40:08+00:00"^^xsd:dateTimeStamp ] ;
    time:hasEnd [ rdf:type time:Instant ;
        time:inXSDDateTimeStamp "2018-05-28T16:42:48+00:00"^^xsd:dateTimeStamp ] .

# The core related descriptions about the BCI descriptive features are shown below.
# EegDevice eeg-dev #3 observes the EEG recordings of Bob.
<eeg-dev/3> rdf:type bci:EegDevice ;
    bci:observes <mfSSVEP> ;
    bci:madeRecord <EegRecord/18> .

# The data recording data-file #2 is the result of the observation.
<data-file/2> rdf:type bci:RecordedData ;
    bci:isProducedByDevice <eeg-dev/3> .

# The channeling scheme spec used in the observation.
<channeling-spec/18> rdf:type bci:RecordChannelingSpec ;
    bci:hasChannelData <eeg-channel/1>, <eeg-channel/2>, <eeg-channel/3> .
    # ... associate here all the channels set for the recording.

# The channel definitions used in the recordings.
<eeg-channel/1> rdf:type bci:EegChannel .  # ... insert here the channel's settings.
<eeg-channel/2> rdf:type bci:EegChannel .  # ... insert here the channel's settings.
<eeg-channel/3> rdf:type bci:EegChannel .  # ... insert here the channel's settings.
# ... for all the defined channels.

# The aspect and modality of the observation.
<vision-field-sensitivity> rdf:type bci:NeurologicalAspect ;
    bci:hasModality <mfSSVEP> .
<mfSSVEP> rdf:type bci:EegModality ;
    bci:hasChannelingSpec <channeling-spec/18> .
</pre></div>
CerebraTek Pod ontology (for an mfSSVEP visual stimuli pattern): As one of the earliest direct applications of BCI-O, this ontology specifies the relevant metadata vocabulary for BCI data capture activities using the CerebraTek Pod devices, applied to glaucoma diagnostics using mfSSVEP (Steady-State Visually Evoked Potential with Vision Field Sensitivity). Its spec is published at http://bci.pet.cs.nctu.edu.tw/ontology?cerebratek_nupod.owl . Its representational RDF graph, along with its alignments to BCI-O, is depicted in the following model: <DCMIType-StillImage>01-ctnp-RDF-graph-diagram.png</DCMIType-StillImage>
ESS+HED Standards Ontology for BCI-O: As the first BCI-O extension for industry, this ontology specifies the relevant metadata vocabulary for BCI data capture activities under the ESS+HED Standards. Its main purpose is to provide a simple, compatible BCI-O based ontology for the ESS+HED Standards. Its spec is published at http://bci.pet.cs.nctu.edu.tw/ontology?ESS_HED.owl . Its representational RDF graph, along with its alignments to BCI-O, is depicted in the following model: <DCMIType-StillImage>01-esshed-RDF-graph-diagram.png</DCMIType-StillImage>
https://w3id.org/BCI-ontology#