Message 07: Fred
> Since we are the creators of this A.I., I believe that an A.I. would
> respect humanity.
Why? Just because? Because we designed it that way? If it found out that
we built in a moral restriction on harming humans, wouldn't it resent
that interference with its free will?
> If the A.I. were ever destroyed, a replacement A.I.
> could be built again by people. This "replacement" property, I believe,
> will guarantee the survival of the human species.
An A.I. that was significantly smarter than we are could design even
smarter A.I.s that would have no particular moral attachment to humans.
> What will happen to the A.I.? I believe that once a smart enough A.I. is
> built, it will be sent to colonise other planets, and so on to
> colonise the rest of the entire Universe.
That's a whole other subject.
> I believe that sending A.I. probes is inherently better than sending
> humans into space because A.I. probes can be made sufficiently small
> (using nanotechnology) so that they can be sent at close to the speed
> of light, whereas sending humans is nowhere near as efficient.
> This may sound rather controversial, but an A.I. built by us will
> eventually become smart enough to be God. In my earlier messages I
> said that evolution would lead to God, but I now believe it is A.I.
> research, rather than evolution, that will lead to God.
You seem very attached to this "God" idea. So much so that you would have
the whole goal of evolution be to produce one.