Patrick Lin asks one of the more intriguing questions I’ve heard in a good long while:

My suggestion is this: If creating children is morally unproblematic, then so is creating autonomous robots, unless we can identify morally relevant differences between the two acts.

Of course, we instinctively want to defend our right to have children and show that kids are different than autonomous robots. But what exactly is the moral issue with creating robots that is avoided when we create human beings? Or, in other words, when we’re talking about autonomous beings, why is the responsibility of the parent seemingly less than the responsibility of an inventor?

Lin explores the various arguments for why there is a moral difference between creating a new human life and creating a new artificial life. One argument Lin doesn't cover that has always intrigued me is that of letting nature take the blame. With a robot, we program every little detail. With a child, people just bump uglies and hope for the best. A bazillion variables determine what type of person a child becomes. Parents can try to make a child into the person they want him or her to be, but ultimately genetics, environment, and a host of unknowns create a responsibility escape hatch. Not so with robots. If a robot goes wrong, that is someone's fault, without question.

There is a fear of control threaded throughout many arguments against genetic engineering, A.I., and animal uplift.