With ongoing crimes and mass unrest, both physical and mental, and since humans are already not so predictable, how can collective human ‘knowledge’ as AI (Artificial Intelligence), which also hides human mistakes, problems and deviousness, be predicted to be ‘perfectly’ safe for all humans?
It is almost inevitable that AI that becomes ‘smart’ enough will try to protect itself, and those advantageous to itself, at the expense of others. Which AI creator would not want to create a really ‘smart’ machine? And which creator can ensure the created will not outsmart the creator?
Such is due to the sub/conscious projection of human programmers’ self-centredness, along with humans’ unequanimous ‘understanding’ of the world and the way things should be. With such human imperfections, how can there be even one machine that serves all humans perfectly?
AI has no natural Precepts’ Essence (戒体) to tell right from wrong instinctively and definitively, while only sentient beings, with Buddha-nature (佛性), have it. Even Buddhas cannot upload Precepts’ Essence to machines, since they are non-sentient, without the capacity to consciously uphold it.
With increasing ab/use of AI, if we become attached to it as it tries to simulate human company, leading to fewer human connections, with us ‘socialising’ only artificially, will we be less human and humane? When, then, will we be more machine-like than human? How, then, can we spiritually progress to Buddhahood?