Brief kernel of an idea:
- Societies deem certain ideas “dangerous”.
- If it is possible to technologically eliminate perceived dangers, we may be tempted to do so, even when our perception is wrong.
- Groupthink has led to catastrophic misjudgments.
- This represents a potential future “great filter” for the Fermi paradox. Previous attempts at eliminating dissenting views do not qualify, as they were social rather than technological in nature, and limited in geographical scope.
- This risk is not yet practical, but we should not be complacent just because brain-computer interfaces are rudimentary and indoctrination viruses are fictional: universal surveillance is already sufficient and affordable, limited only by the need for sufficiently advanced AI to assist human overseers (perfect AI is not required).