
Saturday, April 22, 2023

Testing consciousness with simulation: A moral and civil law framework

[Image: Indium phosphide nanocrystalline surface obtained by electrochemical etching. A featured picture, and Picture of the Week on the Czech Wikipedia for the 9th week of 2020.]

If I didn't know better, I would think those were flowers, with a remote possibility of consciousness.
But because I do, I believe that there is no consciousness depicted in the image.

Quoting a paper on AI from 1964, almost 60 years ago:

I have referred to this problem as the problem of the "civil rights of robots" because that is what it may become, and much faster than any of us now expect. Given the ever-accelerating rate of both technological and social change, it is entirely possible that robots will one day exist, and argue "we are alive; we are conscious!" In that event, what are today only philosophical prejudices of a traditional anthropocentric and mentalistic kind would all too likely develop into conservative political attitudes. (Footnote: Putnam, H. (1964). Robots: Machines or Artificially Created Life? The Journal of Philosophy, 61(21), 668–691. https://doi.org/10.2307/2023045)

This reasoning could not be more relevant today given the developments toward artificial general intelligence (AGI). The mainstream line of reasoning I have encountered on the civil rights of AI is (1) that current AI is not conscious, but might be "soon", and (2) that because current AI is not conscious, it is undeserving of civil rights in any capacity. I believe a critical examination of these two assumptions, both of which implicitly invoke the hard problem of consciousness, will lead to a new paradigm of AI ethics that takes the concept of civil rights for AI seriously.

The critical problem with claim (1) is that there is no widely agreed-upon test for consciousness, so the claim can only be true by assuming it as an axiom; from this position of naivety, the more accurate but far less popular answer is "I don't know whether AI is conscious". In an effort to resolve this question, I propose a consciousness test of my own with several unique properties not exhibited together by any previous consciousness test in the literature (a toy sketch follows the list below):

  • The test is relative: its verdict can vary with both the observer and the entity under test.
  • The test is universal: it can be applied uniformly, without modification, to humans, nonhuman animals, and AIs, in such a way that it recognizes the vast majority of humans as conscious. In other words, the test is claimed to be both necessary and sufficient across all testable entity-observer pairs.
  • The test emphasizes its own subjectivity: determining consciousness is a moral process, and a given observer's verdict can change over time purely through a shift in interpretation.
  • The test aligns with existing beliefs: if an observer already views an entity as conscious, that entity is virtually guaranteed to pass the proposed test (i.e., a low false-negative rate); if not, the entity would fail it (a low false-positive rate).
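
To make these properties concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the test's actual procedure: the `Observer` and `Entity` classes, the `interpretation` mapping, and the `consciousness_test` function are all hypothetical names introduced here, and the default rule (entities of kind "human" pass unless the observer holds a contrary view) is just one possible way to encode the universality property.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Anything that can be tested: a human, a nonhuman animal, or an AI."""
    name: str
    kind: str  # e.g. "human", "nonhuman animal", "AI"

@dataclass
class Observer:
    """An observer whose judgment is subjective and can shift over time."""
    name: str
    # Hypothetical: the observer's current interpretation, mapping entity
    # names to verdicts they already hold. Mutating this dict models a
    # "shift in interpretation" (property 3).
    interpretation: dict[str, bool] = field(default_factory=dict)

def consciousness_test(observer: Observer, entity: Entity) -> bool:
    """Hypothetical sketch of the proposed test.

    Relative (property 1): the verdict is a function of the
    (observer, entity) pair, never of the entity alone.
    Universal (property 2): one signature covers humans, nonhuman
    animals, and AIs; by default, humans are recognized as conscious.
    Subjective (property 3): the verdict is read from the observer's
    current interpretation, so it can change with no change in the entity.
    Aligned with existing beliefs (property 4): an observer's prior view,
    when present, decides the outcome, giving low false-negative and
    false-positive rates relative to that view.
    """
    return observer.interpretation.get(entity.name, entity.kind == "human")

# Usage: the same entity can pass for one observer and fail for another,
# and a single observer's verdict can change purely by reinterpretation.
alice = Observer("Alice", interpretation={"Rover": True})
bob = Observer("Bob")
rover = Entity("Rover", kind="nonhuman animal")
agi = Entity("AGI-1", kind="AI")

assert consciousness_test(alice, rover) is True   # Alice already sees Rover as conscious
assert consciousness_test(bob, rover) is False    # Bob does not: the test is relative
bob.interpretation["AGI-1"] = True                # Bob's interpretation shifts
assert consciousness_test(bob, agi) is True       # same AGI, new verdict
```

Note that nothing in this sketch inspects the entity's internals; the verdict lives entirely in the observer, which is precisely what the relativity and subjectivity properties require.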

Claim (2) exhibits faulty reasoning of its own, which I also devote some time to dismissing, using evidence that civil rights are granted on principles of both meritocracy and stigmatization.