I’ve noticed that the first time some people are exposed to the outcome of a SNOMED CT classification, they regard it as some form of black magic, because they don’t like what it does and/or don’t understand why it has done what it has done.
If they don’t like what it has done, it’s likely because either:
- They didn’t realise that the terminology was based on Description Logic (in which case they need to be enlightened); or
- The classification has revealed defects in the modelling (which is a good thing!).
Otherwise, the next step is explaining where the inferred properties come from, which shouldn’t be particularly difficult.
The relationships in SNOMED CT are essentially statements about the source concept. Taking a step away from medicine, consider these statements:
- A mammal is an animal
- A camel is a mammal
- A mammal produces milk
Then ask these questions: Is a camel an animal? Does a camel produce milk?
Hopefully you answered yes to both. The process you just went through in your head is how a classifier infers stuff. If the response is that the answers were obvious, i.e. “Of course a camel is an animal”, it’s time to make the same statements but go totally abstract, to avoid any perception that pre-existing knowledge is informing the answers:
- A doodle is a widget
- A squibble is a doodle
- A doodle has colour foo
Now, the questions become: Is a squibble a widget? What colour is a squibble?
Through deductive reasoning most people should be able to answer these. This approach reminds me of those riddles you’re asked to solve in primary school (retrospectively, I assume these are to teach kids logic and reasoning), which were apparently popularised by Lewis Carroll (although the syllogism has much earlier origins). Of course, this is not the entire story when it comes to classification; but rather than get bogged down in the language used to explain Description Logic, explaining the process using such simple deductions helps dispel the haze of magic, or, even worse, the fear that the classifier is “making stuff up”.
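For anyone who wants to see those deductions mechanised, below is a minimal sketch in Python. It is entirely my own illustration, not how a real Description Logic classifier works internally, and the table and function names (`IS_A`, `PROPERTIES`, `ancestors`, and so on) are all invented for this example. It encodes the statements above as simple lookup tables and answers the two kinds of question we asked: “is X a Y?” (by walking the is-a chain transitively) and “what are X’s properties?” (by inheriting statements from ancestors). Note that it assumes a single parent per concept; SNOMED CT concepts can have several parents, which is one of many things a real classifier handles that this toy does not.

```python
# A toy "classifier": each concept has at most one parent, unlike SNOMED CT.
IS_A = {
    "camel": "mammal",
    "mammal": "animal",
    "squibble": "doodle",
    "doodle": "widget",
}

# Properties stated directly on a concept.
PROPERTIES = {
    "mammal": {"produces": "milk"},
    "doodle": {"colour": "foo"},
}


def ancestors(concept):
    """Walk the is-a chain transitively: camel -> mammal -> animal."""
    result = []
    while concept in IS_A:
        concept = IS_A[concept]
        result.append(concept)
    return result


def is_a(concept, candidate):
    """Is `candidate` the concept itself or one of its ancestors?"""
    return candidate == concept or candidate in ancestors(concept)


def inferred_properties(concept):
    """Gather properties stated on the concept or inherited from ancestors.

    setdefault means a statement on a nearer concept wins over one
    inherited from further up the hierarchy.
    """
    props = {}
    for c in [concept] + ancestors(concept):
        for key, value in PROPERTIES.get(c, {}).items():
            props.setdefault(key, value)
    return props


print(is_a("camel", "animal"))          # True
print(inferred_properties("camel"))     # {'produces': 'milk'}
print(is_a("squibble", "widget"))       # True
print(inferred_properties("squibble"))  # {'colour': 'foo'}
```

Running it gives exactly the answers you reached in your head: the camel is an animal that produces milk, and the squibble is a widget whose colour is foo.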