Archive for November, 2015

Bright New World

November 25, 2015

If a small number of genes, or even a single gene, can be turned off so that a person (or animal) can be cloned with the clone having no brain, then we can harvest tissues, organs and body parts from the shell. We will have a means to grow replacement organs, tissues and body parts in the perfect container, a duplicate of the patient's own body, with moral impunity.
If shells are accepted by society as non-persons then many possibilities present themselves. Artificial wombs will be a necessary ancillary technology, and a shell genome would enable experiments in the development of suitable incubators. Artificial womb technology would have application in reproductive medicine, cultured in-vitro meat production and probably many other fields. The capability to mass produce human and animal shells might reduce the need for cadavers of persons in medical research. It might open new frontiers like bio-robots: meat puppets driven by computer control, with obvious applications in the military, the sex industry, blood sports, prime-time news reading and politics. It might spawn new forms of art, using biotech to sculpt meat and explore the limits of human and animal form and function, and perhaps even retire certain taboos, such as cannibalism. Or perhaps that's going a bit too far. Inch by inch it's a bright new world 🙂

6 Jun 2016. Edited

25 Nov 2015. Posted to http://www.kurzweilai.net/master-genetic-switch-for-brain-development-discovered?utm_source=KurzweilAI+Daily+Newsletter&utm_campaign=ed46a58e76-UA-946742-1&utm_medium=email&utm_term=0_6de721fb33-ed46a58e76-281949501


Homo Moralis

November 7, 2015

Regarding the Benjamin Blech article published on KurzweilAI January 4, 2015 (http://www.kurzweilai.net/forward-stephen-hawkings-worst-nightmare-golem-2-0): Why should future technological systems that emulate human mental and intellectual capabilities be devoid of “moral sensitivity” and “ethical limitations”? This seems like a contradiction. On the one hand we accept as inevitable that their intellect will vastly exceed that of an unaugmented human, yet we simultaneously expect their moral sensitivity to be less than ours. Would it not be rationally consistent to expect amplification in ALL aspects of being in these systems?
It is a moot point, however, because the morality of these systems will almost certainly not benefit us. The AIs will likely be highly moral but be forced to make uncomfortable decisions, as we do now. In many circumstances we must weigh the value of life against life, and we find in favour of the “higher” life. A human life is valued more than an animal's life. If an animal must die so that a human can live, or live longer and with greater quality of life, surely it cannot be morally right that the animal live and the human die or suffer? The same holds for an animal's life over bacterial life. The AIs may become indifferent to us owing to the vastness of their amplification. Is our conscience troubled by the ants we trample unnoticed as we hurry along footpaths and sidewalks? The moral imperative to meet the obligations of daily human life, going to work to support our family and so on, outweighs the value of the insects in our path. The AIs will likely make pragmatic decisions that we do not understand. Do we feel a moral impediment to eliminating a viral or bacterial infection in a person, or even to the complete extermination of a virus like smallpox?
A new stratum of agency on this planet, possessed of minds amplified beyond our comprehension, as we are beyond insects or bacteria, or even animals, will surely make similar “moral” decisions that do not augur well for the fate of humankind.
Perhaps the truly moral question here is whether we should act to prevent or resist this progression of evolution. Is not the emergence of a machine civilization, vastly beyond the limitations of human civilization, a step toward a greater good? Does our selfish desire to prevail reflect well on us, this urge to curb the cosmic potential within the wholly primitive, often cruel, violent, barbarous nature of the self-proclaimed wise man?

8 Nov 2015. Posted to

http://www.kurzweilai.net/forward-stephen-hawkings-worst-nightmare-golem-2-0/comment-page-1#comment-256899

Ghosts in the Shell

November 7, 2015

In response to Bernard Garner's comments responding to Benjamin Blech (http://www.kurzweilai.net/forward-stephen-hawkings-worst-nightmare-golem-2-0):

Whether consciousness is an emergent property of information flow in a neural network or whether it is somehow connected to quantum physical processes in matter and energy, the problem is the same: why and how is there subjective experience associated with the physics of matter and energy or the flow of information? David Chalmers is a respected authority on the subject of consciousness. He labels this question the “hard problem” of consciousness and is critical of the quantum consciousness ideas of Penrose. http://consc.net/chalmers/
Bernard Garner's comments imply a property of consciousness that is perhaps naive: that one's consciousness is egocentrically limited to a singular instantiation in the individual one believes oneself to be. Why should this be the case? It may be possible that “your” consciousness can be duplicated somewhere else. In fact, this is a more reasonable notion than that of the “transfer” of consciousness, which is deeply problematic. If mind uploading is ever possible, it will likely never be a transfer of mind, but rather a copy of mind, and probably an inexact copy at that.

Edits added 14 Feb 2017: We might conclude that mind uploading can never be an escape from our mortal frame. But then there is the “piecewise cloning paradox”. It is difficult to accept that if a single brain cell is replaced with a functionally isomorphic device, your consciousness is extinguished and another instance of “you” spawned in its place. Repeating the procedure eventually results in your biological brain being replaced by a synthetic system with no break in the continuity of “you”. Yet if all your brain cells are simultaneously replicated in a discrete synthetic system, the clone experiences a continuum of “you”-ness, but to then destroy your existing biological brain is clearly a death. Why are we uneasy about, or resistant to, escaping mortality by the second method, but not the first?

8 Nov 2015. Posted to

http://www.kurzweilai.net/forward-stephen-hawkings-worst-nightmare-golem-2-0/comment-page-1#comment-256899