self-awareness and the desire to be known
Jun. 23rd, 2022 10:42 am

I've been musing on self-awareness and on humans' desire to have machines be self-aware ever since the story about the guy at Google came out. My thoughts have run in all kinds of directions. For instance: about relationships up and down the awareness scale. Most of us likely have had relationships with beings more self-aware than we are (parents are generally more self-aware than toddlers, and all of us have been toddlers and had parents or others filling that role), and most of us likely have relationships with beings that are less self-aware, and/or differently aware, than we are.
Those relationships are not only with living things but with nonliving things: we have feelings about and express ourselves to our computers, phones, cars, coffee makers, microwaves ... These might not seem like relationships because they're so one-sided, but I think they are: we interact; they respond to our inputs; we respond to theirs. We don't expect our microwave to discourse with us on anything, but we do expect that if we press a button, it will shoot microwaves through something and heat it/cook it for us. We're happy when it meets our expectations and disappointed or worried or annoyed if it doesn't.
What I'm trying to suggest is that we have relationships with all kinds of things at different levels of awareness, and we're generally fine with that. But the more like us something or someone is, the more we seem to want its/their awareness to match ours. Misunderstandings that arise with people very close to us show how much we expect or depend on those close ones' awareness matching ours. But sometimes it doesn't. We say a thing, and to us it's pregnant with meaning and import, and the person we're talking to replies, and we feel they've understood! Their thoughts are running the same way, and their response shows that! Only to discover later that no, they were *not* thinking in the way we imagined, and furthermore, they had no idea that what we said carried so much weight for us.
Or we can be on the other side of that--having an innocent conversation one day, only to find out to our alarm that it had all kinds of other meanings for the other person.
Those differences are painful, but it would be a weird kind of tyranny, a kind of Borg-ness, to expect another human being to understand and respond to us perfectly ... impossible really, given that we can't even say, ourselves, what a perfect understanding or response would look like.
I was thinking, if a machine/AI could be so cleverly programmed that it could duplicate human-type reactions, human-type non sequiturs, human-type self-absorption from time to time, but also human-type friendly queries, supportive remarks, gratifying curiosity and so on--all based on code--would it matter that it was code that was generating those responses and not whatever it is that generates those things in a human? Could being in relationship with a machine/AI on its own terms mean accepting its machine-ness and not requiring it to duplicate organic human-ness?
What do you think?