Discussion of standard for marking explicit tags (e.g. NSFW) on a post so that frontends can appropriately auto-hide images of such posts.
This is to start a discussion of how to handle posts with NSFW (and other explicit) content. The goal is to avoid auto-showing images that the user presumably would not want displayed while browsing Steem, either because they are inappropriate for the context in which they are browsing (e.g. at work) or because they simply do not want to see them (especially true of NSFL content), while still not making the user experience annoying by requiring every post to be click-to-reveal. As usual, it works on the principle of posters self-reporting the appropriate tags, which frontends can then act on, with community downvoting power used to enforce accurate self-reporting.
This is what I posted in the #proposals channel of the Steem slack:
A post/comment may have an "explicit" field in the JSON with an array of strings as its value. If a post/comment has an "nsfw" string in that array, it is considered NSFW and there would always be a visible NSFW tag displayed next to the post/comment.
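To make the shape concrete, here is a minimal sketch of what such metadata and the check could look like. The interface and function names are illustrative assumptions; only the "explicit" field and the "nsfw" string come from the proposal.

```typescript
// Sketch of the proposed post/comment JSON metadata. The "explicit"
// field holds an array of strings; "tags" stands in for whatever
// other fields the JSON already carries.
interface PostMetadata {
  tags?: string[];
  explicit?: string[]; // e.g. ["nsfw"], possibly other labels later
}

// Per the proposal, an item is considered NSFW if "nsfw" appears
// in the "explicit" array.
function isNsfw(meta: PostMetadata): boolean {
  return (meta.explicit ?? []).includes("nsfw");
}

const example: PostMetadata = { tags: ["photography"], explicit: ["nsfw"] };
```

A frontend could call `isNsfw` on each item it renders to decide whether to display the visible NSFW tag.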
If a post is marked NSFW, its pictures will not be shown in any recent/trending view, except in views for categories whose names appear on a client-side list of NSFW categories. Even in those views, the pictures will not be shown if the user has enabled the "hide_nsfw_pictures" option in their settings.
Clicking on an NSFW post to see its full detail will show the pictures in the post as long as the user has not enabled the "hide_nsfw_pictures" option in their settings; if they have, each image is replaced with a placeholder box reading "Possible NSFW content", which can be clicked to reveal the image. In a discussion thread, an NSFW comment will by default hide its images behind the same click-to-reveal placeholders, unless the top-level post of the thread is itself marked NSFW and the user has not enabled "hide_nsfw_pictures".
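The listing, post-detail, and comment rules above can be collapsed into one decision helper. This is a sketch under stated assumptions: all type and field names here are hypothetical, and `hideNsfwPictures` stands for the user's "hide_nsfw_pictures" setting.

```typescript
// Where in the UI the item is being rendered.
type View = "listing" | "post_detail" | "comment";

interface DisplayContext {
  view: View;
  itemIsNsfw: boolean;          // the post/comment carries the "nsfw" label
  inNsfwCategoryView?: boolean; // listing is for a client-side-listed NSFW category
  rootPostIsNsfw?: boolean;     // top-level post of the thread is marked NSFW
  hideNsfwPictures: boolean;    // the user's "hide_nsfw_pictures" setting
}

// Returns true if images should be shown directly, false if they
// should be hidden (in listings) or replaced with a click-to-reveal
// "Possible NSFW content" placeholder (in detail/comment views).
function showImages(ctx: DisplayContext): boolean {
  if (!ctx.itemIsNsfw) return true;       // non-NSFW content is unaffected
  if (ctx.hideNsfwPictures) return false; // the setting always hides
  switch (ctx.view) {
    case "listing":
      return ctx.inNsfwCategoryView === true;
    case "post_detail":
      return true;
    case "comment":
      return ctx.rootPostIsNsfw === true;
  }
}
```

Encoding the rules this way makes it easy to see that the "hide_nsfw_pictures" setting dominates every other condition, which matches the intent of the proposal.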
When submitting a post or comment, there would be a checkbox indicating whether it is NSFW. If checked, the "nsfw" string is included in the array value of the "explicit" field in the post/comment's JSON. The checkbox is unchecked by default (meaning not NSFW). However, if the comment is in a discussion thread whose top-level post is marked NSFW, or if the post is in a category whose name either contains the word "nsfw" or appears on a client-side list of NSFW categories, then the checkbox defaults to checked (meaning NSFW).
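The default-state rule for the checkbox can be sketched as a small function. Names and parameters here are assumptions for illustration; `nsfwCategories` stands in for the client-side list of NSFW categories mentioned above.

```typescript
// Default state of the NSFW checkbox on the submission form, per the
// proposal: comments inherit from the thread's top-level post; posts
// default to checked when their category name contains "nsfw" or is
// on the client-side NSFW category list.
function nsfwCheckboxDefault(
  isComment: boolean,
  rootPostIsNsfw: boolean,
  category: string,
  nsfwCategories: string[],
): boolean {
  if (isComment) return rootPostIsNsfw;
  return category.includes("nsfw") || nsfwCategories.includes(category);
}
```

Note this only sets the *default*; the poster can always toggle the checkbox, with inaccurate self-reporting left to community downvotes to police.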
Again, these are just my initial ideas to use as a starting point for a discussion toward a more formal standard.