@saraislet I'd tend to agree, except the full term is "informed consent" & unlike another human being, about whom we can at least extrapolate some general ideas because they're fundamentally like ourselves, an AI will forever remain utterly opaque to us. I honestly think the only way to achieve meaningful human/robot relations is if the robot is literally an artificial life form with quasi-human instincts etc., which is probably impossible, but even if it weren't, why bother? Humans already exist.
@saraislet Data from Star Trek is such a great exemplar for this; we have the slavery allegory in "The Measure of a Man": he's a self-aware life form with a fundamental right to self-determination, but meaningful relations between him & humans nevertheless remain a struggle for both him & them because of how fundamentally not human he is (even if the writers clearly struggle with his "no emotions", because sapience without emotions is almost certainly impossible).
@jwcph IMO, for it to count as consent it has to be informed consent; otherwise it's not meaningful at all
But I'd hesitate to say that all human interaction is something we can extrapolate from. Allistic people often consider autistic people to be inexplicable robots; I'm not sure that's all that different. Allistic people also often consider autistic people (or people with various mental health challenges) not to deserve autonomy, for very similar reasons.
I think we can do our best to strive to navigate [informed] consent with people or AI or other lifeforms. I don't see a good reason for AI, but it's not my choice whether people try to make random number generators pretend to be sentient; it is my choice how I treat them.