Can we expect to achieve fairness and ethics in Artificial Intelligence contexts when we are still defining and pursuing those very ideals within humankind itself?
We are forced to step up and conceive a higher consciousness for both.
I strongly believe that we are not ready, as a society, for the presence of devices that resemble human characteristics; the core of the problem lies in the expected interpretation of human emotions and empathy projected onto objects, and everything that evolves from that. For most of us, the confusion always comes from reading shared features into the "one" we interact with. Psychologically speaking, this is what we seek: we empathize with whatever we recognize as closest to our own reflection. So when we see a robot with two eyes and a smile, our brain automatically forms a connection in order to establish natural communication with it, allowing us to engage more readily.
It is an incredibly tricky situation we are creating for ourselves by trying to develop AI that resembles human nature, so long as we have not proved ourselves capable of dispensing fair judgement and equal rights within our own kind.
The scenario suggests there may be no other choice but to constantly re-evaluate our human limits and definitions, and, in the course of that discovery, to acknowledge the similarities, and perhaps the same rights and duties, that may apply to the things and "living" beings that surround us.
The answer may not be clear at all times, which is precisely why constant evaluation must be enforced.