A place for the community to ideate, discuss and debate the key ethics and principles that Opsci should embed into its solution and processes. We begin this discussion with the release of the first draft of our ethical manifesto — our “Purple Paper”! We invite the community to read and provide feedback on this living document.
Here is a link to the paper: Purple Paper v1
Hi team, I had some notes on the Purple Paper above.
Preliminary Notes on Opscientia Neuroethics and Principles Purple Paper (V1)
Example problematic areas include
- Inappropriate monitoring, collection, storage and use of data
- The collective grouping, stratification or classification of clinical users of neurotech, or (perhaps worse) of users unaware they are being classified
- Unregulated ‘optimisation’ and modification of neural processes, and resultant clinical syndromes
- Clinical overreach of neurotech
- Use of these systems in a national defence capacity (a hippocampal AI implant is already being researched by DARPA)
- Many more
Potential issues in clinical context
- Doctors are afforded the right to provide lifesaving care, and certain rights over non-competent persons. In a world with BCIs, such intervention may be indicated in cases of trauma, rapid neurodegeneration, neurovascular events, etc. Informed consent and advance care directives should be updated and extended beyond elderly or palliative patients.
4 – The right to equal access to mental augmentation
- Perhaps the term ‘technological nootropic’ could be used here
- One of the first uses may be in the hippocampus, as it is a highly mapped area of the brain and its inputs/outputs are well known. This may have implications for memory, sleep, emotion, seizures, etc.
5 – The right to protection from algorithmic bias
- Perhaps a more practical note: there will always be algorithmic bias in human-created AI systems, and this will propagate through our neurotech. Rather than aiming to eliminate it, we should aim to have safety nets, controls, and mechanisms for recognising and modifying these programs, in a manner resistant to coercive control and bad actors.
- Also perhaps note that this bias is not always due to a lack of diversity; it can also stem from stated and coded views on value and risk, and micro-adjustments to these values within systems may be required over time as functionality develops.
- I believe more should be written on the dangers of commercialisation of the brain. Technologies such as Neuralink – while exciting – are examples of private companies holding the data of private individuals, which increases the risk of data misuse, propagation, and ongoing research use of this data without the informed consent of those using these technologies. Particularly in the case of clinical implantation of a BCI, we must consider that these patients may be incapable of proper informed consent, and that current consent structures (next of kin, etc.) may not be sufficient to permit perpetual use of these persons’ neural data.
- When devices are implanted clinically, we must be cognizant that consent is given under the shadow of the risk of not receiving the implant. This creates an imbalance between a patient’s consent and the consent that may be freely given by a healthy commercial user.
Please feel free to contact me regarding these comments via email - [email protected]