Data Resilience: Openness
Previously I covered the idea that data has a value that places it somewhere on a spectrum: core data versus metadata. That alone doesn’t determine everything about how flexible and portable the data is, especially for data towards the meta end.
Another property is what I will call openness. There are two aspects to openness that determine portability and future-proofing: readability and standardisation.
Readability
Readability is about how easily we can access our data in a usable form. How much processing or tooling is needed to use and display the data, and how specialised are those tools? Markdown files, such as those Obsidian works with, have a high readability factor because independent tools can view them with little processing. They are especially readable because many devices and systems have built-in tools for the job (e.g. Notepad on Windows can view your Markdown files).
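As a rough illustration, here is how little tooling it takes to open a note in a usable form; a couple of lines of Node.js using only its built-in file module (the file path is hypothetical):

```javascript
// Sketch: reading a Markdown note needs almost no specialised tooling.
// Node's built-in fs module is enough; the raw text is already usable.
const fs = require("fs");

const note = fs.readFileSync("vault/daily-note.md", "utf8");
console.log(note); // the raw Markdown is human-readable as-is
```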
Other formats of data associated with Obsidian, such as JavaScript, are less readable in their raw form. Specialised code editors or IDEs (integrated development environments) help make them more readable. Making use of the data also requires an interpreter of some kind, which lowers readability further: it is functional data, so it is now about the computer reading the data rather than the human.
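To make that concrete, here is a minimal sketch of functional data: a few lines of JavaScript that a programmer can read, but that only become useful once an interpreter such as Node.js runs them (the function is hypothetical, purely for illustration):

```javascript
// Functional data: meaningful to a JavaScript interpreter,
// largely opaque to a non-programmer reading the raw file.
const countWords = (text) =>
  text.trim().split(/\s+/).filter(Boolean).length;

console.log(countWords("Openness is readability plus standardisation."));
// Prints: 5
```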
Standardisation
The more standardised the data is, the more tools and methods you are likely to have for viewing and using it. Markdown is a well-known open standard (although different flavours of the standard muddy it somewhat, as the sketch below shows). JavaScript is also a standard, so although it is less readable because an interpreter is needed to make sense of the data, it doesn’t require proprietary tools.
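Here is a small sketch of how those flavour differences show up in practice, assuming the `marked` and `markdown-it` npm packages (install with `npm install marked markdown-it`):

```javascript
// The same Markdown source through two parsers: marked defaults to
// GitHub Flavoured Markdown, while markdown-it's "commonmark" preset
// is strict CommonMark, so GFM extensions render differently.
const { marked } = require("marked");
const markdownit = require("markdown-it");

const source = "~~strikethrough~~ is a GFM extension, not core CommonMark.";

console.log(marked.parse(source));                    // renders a <del> element
console.log(markdownit("commonmark").render(source)); // leaves the ~~ literal
```

One standard name, two outputs: that is the sense in which flavours muddy the standard, even though the data stays open either way.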