
PREreview of A maturity model for catalogues of semantic artefacts

DOI: 10.5281/zenodo.8039371
License: CC BY 4.0

This is a very timely contribution. A lot of diverse terminology can be found around the topic of ‘semantic interoperability’. To give examples: Knowledge Organisation Systems, controlled vocabularies, ontologies, and semantic artefacts are used for the ‘objects’ or ‘assets’ themselves, while catalogues, registries, repositories, lists … are used for the means of gaining an overview of those ‘objects’. One could say that, while the community/ies around semantic interoperability are still ‘sorting out’ how to deal with the ‘objects’ properly, they are also in great need of sorting out the vocabulary, trying to find a shared language. For this much-needed step, this paper is a great contribution.

Disclaimer: I write this pre-review as a member of the EOSC Task Force Semantic Interoperability. So I am biased, as I am familiar with some of the authors and have been involved in discussions around the topic. Having said that, I had not read the paper prior to this review, and I try to be as objective as possible.

The paper does two things. First, it presents a systematic literature review (section 2) executed on a collection of 15 key documents recommended by Task Force members. Second, it analyses 26 catalogues. From the first step, the authors derive 12 so-called dimensions. Those dimensions are then used in the analysis of the 26 catalogues, in this way also testing whether the right dimensions had been identified in the first place. One could say that the paper presents a ‘classification task’ together with a landscape analysis of how the classification is populated.

The paper uses various overviews (in table form) - very useful. The authors describe their workflow very clearly (sec 3); they have basically organised a collective close-reading exercise. The raw data (the original literature collection) are shared - an attitude on which I can only congratulate the authors, as it makes their review more transparent.

Major issues

Having said this, there are some points where I think the paper could be improved further, or which could be made clearer. I order them according to the flow of the paper (page-wise), although these points range from smaller to more fundamental ones.

Page 2: Are semantic artefacts ‘objects’ or ‘tools’? I would like to read more about this. The first notion emphasises the character of a semantic artefact as a ‘thing’; the second emphasises the processes around it, which actually turn the ‘object’ into a semantic one.

Page 2: the sentence “A semantic artefact is a machine-actionable …” - is this not a direct quotation from one of the FAIRsFAIR project deliverables? Those deliverables are cited properly, but at another place.

Page 3: prior to ‘Considering this context, the paper…’: here I missed a sentence giving the reader, once more, the motivation for why this paper is about catalogues - why is the catalogue idea so central, and why have the authors (despite the long lists of services they mention before) decided to zoom in on ‘catalogues’?

Page 4: This might be a TeX thing, but Table 1 is placed prior to the definition of the ‘Me’, ‘Op’, … dimensions (sec 2.3), i.e. the dimensions which came out of the literature analysis. Moreover, I really missed how the authors detected/identified those dimensions; 1-2 more sentences would be great here. Later on, features of dimensions are introduced. Here I got lost: how were those features defined - or were they just keywords each reviewer was free to choose? To summarise, the KOS/classification-building process needs to be described in greater detail. In sec 3.4 you describe that you condensed the features, but did the dimensions remain unchanged? In sec 4.2 another term emerges, ‘type’: so we have dimensions/features/types; this needs to be defined more clearly!

Page 6: I was wondering about possible ‘dimensions’ such as the size of the community using a catalogue, the Technology Readiness Level, or the time of existence. Some of those aspects do come back in the discussion section, though.

Section 3: a good example of a very clear method description. I think a list of the 26 catalogues as an appendix would be good.

How are the ‘majority dimensions’ you defined related (or not) to other maturity models? Maybe something for future research?

I think you capture the variety and also the volatility of the current discourse beautifully; your description is rich and nuanced. Still, I was wondering whether you should not be braver and state (more pronouncedly) what your definition of a catalogue is, even if it is only a working definition! And what about older/classic definitions of a catalogue? How much does your use of the term resonate with those? Another reviewer wrote that you gave a definition, so I scrutinised the text once more, but I could not really find your stance on it; maybe I overlooked it?

I like your discussion statements. I also recommend looking into https://knoworg.org/the-dans-koso-observatory/, an exercise to classify KOS by also looking at their maintenance, history, size, …. Some of the important dimensions/features you find have also been tackled in this ‘annotation/classification’ exercise, even if the objects are different.

Minor issues

  • sec 3.1, Page 7: is a search engine a repository, and the other way around?

  • sec 3.1: could one visualise the 26 catalogues and their features?

  • sec 3.2, ‘describe the possible values to use…’: does ‘values’ mean features?

  • sec 3.3: here, features are introduced; they need to come earlier. Also, what about ‘coding agreement/alignment’ - how differently did the experts review the catalogues? (See the sketch after this list for the kind of agreement measure I have in mind.)
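
To illustrate what I mean by ‘coding agreement’, here is a minimal sketch of how agreement between two reviewers could be reported per dimension. It is purely illustrative: the dimension, the codes, and the catalogue codings are invented, and I am assuming scikit-learn’s cohen_kappa_score is available; the authors may of course prefer a different measure.

    # Hypothetical illustration of reporting coding agreement between two reviewers.
    # The dimension name and the codings below are invented for this example.
    from sklearn.metrics import cohen_kappa_score

    # Codes assigned by two reviewers to the same five catalogues for one dimension
    # (say, an "Openness" dimension coded as "open" / "restricted" / "unclear").
    reviewer_a = ["open", "open", "restricted", "unclear", "open"]
    reviewer_b = ["open", "restricted", "restricted", "unclear", "open"]

    # Cohen's kappa corrects raw percentage agreement for chance agreement.
    kappa = cohen_kappa_score(reviewer_a, reviewer_b)
    print(f"Cohen's kappa for this dimension: {kappa:.2f}")

Reporting such a figure for each dimension would make it easier to judge how stable the feature assignments are across the experts.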

Competing interests

The author declares that they have no competing interests.