Technology neverending, is that ethical?


Reply Thu 19 Jun, 2008 10:47 pm
I can see technology becoming very hard to control: the exponential growth of transistors, nanotechnology (just imagine the military applications), mind-controlled robots, cloning, nuclear warfare becoming more global, and so on.

Eventually I can see us reaching a point where working out the morality of such technology is so slow, compared to the rate of technological growth, that people would turn evil, because all existing doctrines would be obsolete.
I have no question; I just wanted to talk about it. :)
 
Zetherin
 
Reply Thu 19 Jun, 2008 11:41 pm
@Holiday20310401,
It's definitely something interesting to think about. Should the technological singularity (Technological singularity - Wikipedia, the free encyclopedia) arise, computers will be at a level at which they can self-repair and exceed human intelligence. If this happens, there's no telling what could actually occur.

This I found interesting, from the article I just posted:

"Berglas (2008) argues that unlike man, a computer based intelligence is not tied to any particular body, which would give it a radically different world view. In particular, a software intelligence would essentially be immortal and so have no need to produce independent children that live on after it dies. It would thus have no evolutionary need for love. But it would have an evolutionary need for power because the first AI that wants to and can dominate the earth will dominate the earth."

I'm just hoping I have my own personal Arnold from the future to protect me, because there's a good chance this may occur in my lifetime. :)

I completely agree that we must start taking all this into consideration and try to apply some type of preemptive measure. However, technology is worldwide and morality so varied that I highly doubt we could all come to a universal common ground - unless we had to (which would imply we're already too late, as we're being destroyed by machines).
 
Didymos Thomas
 
Reply Fri 20 Jun, 2008 05:02 pm
@Zetherin,
I think we're already at that point.

We can destroy the entire planet in a matter of minutes if we detonate enough nuclear warheads. Humans should not have this sort of authority.
 
Zetherin
 
Reply Fri 20 Jun, 2008 05:05 pm
@Didymos Thomas,
Didymos Thomas wrote:
I think we're already at that point.

We can destroy the entire planet in a matter of minutes if we detonate enough nuclear warheads. Humans should not have this sort of authority.


No, we'd be at that point if the nuclear warheads chose to go off BY THEMSELVES. ;)
 
GoshisDead
 
Reply Fri 20 Jun, 2008 05:07 pm
@Zetherin,
This has been a theme in popular literature for millennia. Technology has no morals; it just is. Morality lies in the use of technology.
 
Didymos Thomas
 
Reply Fri 20 Jun, 2008 05:14 pm
@GoshisDead,
Quote:
No, we'd be at that point if the nuclear warheads chose to go off BY THEMSELVES


Well, they can. Mutually assured destruction was a real policy, and it is conceivable that humans could devise automatic responses for warheads. We have the technology - I can program an alarm on my cell phone, so why not automatic detonation under certain conditions for warheads?
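As a rough illustration of that point (a hypothetical Python sketch - the function names and the alarm-clock stand-in condition are invented, not any real system), an automatic response is just a true/false check wired directly to an action, with no judgment anywhere in between:

Code:
# Hypothetical sketch: an automated response fires the moment its programmed
# condition becomes true; no human judgment sits between check and action.
import time

def condition_met(target_epoch: float) -> bool:
    # Pure true/false check, e.g. "the set time has been reached".
    return time.time() >= target_epoch

def automated_response(target_epoch: float, action) -> None:
    # Acts as soon as the condition holds; never asks whether acting is right.
    if condition_met(target_epoch):
        action()

# Usage: an alarm-clock-style trigger set for "now".
automated_response(time.time(), lambda: print("alarm fired"))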
 
Zetherin
 
Reply Fri 20 Jun, 2008 05:16 pm
@Didymos Thomas,
Didymos Thomas wrote:
Well, they can. Mutually assured destruction was a real policy, and it is conceivable that humans could devise automatic responses for warheads. We have the technology - I can program an alarm on my cell phone, so why not automatic detonation under certain conditions for warheads?


No, I'm talking about artificial intelligence. It wouldn't be us programming the warheads to go off; the machine would be CHOOSING ON ITS OWN, whenever it wants, with a consciousness of its own. Imagine if you set the alarm on your cell phone for 9:30 am and it just chose to go off at 2:30 pm instead, just to **** you over.

If we were at that point, we'd most likely already be dead.
 
Didymos Thomas
 
Reply Fri 20 Jun, 2008 05:29 pm
@Zetherin,
Self-aware AI is terrifying. I guess my point is that I see no reason why we have to get that far before our pursuit of technology becomes unethical.
 
Zetherin
 
Reply Fri 20 Jun, 2008 05:35 pm
@Didymos Thomas,
Didymos Thomas wrote:
Self-aware AI is terrifying. I guess my point is that I see no reason why we have to get that far before our pursuit of technology becomes unethical.


Oh, no, I completely agree we should be discussing the ethics of technology in order to form some kind of preemptive strike or plan for when this does occur. However, my point was that, judging by the variation of morality all across the world, it may be difficult for all of us to come to a common decision. After all, it does us no good if the United States and Japan agree on something and, 5 years later, a form of AI is developed in Russia. It almost seems inevitable that there will be destruction.
 
Didymos Thomas
 
Reply Fri 20 Jun, 2008 05:39 pm
@Zetherin,
Same problem with nuclear disarmament.

But what's the alternative? We can either stay the course ourselves, leading to certain destruction, or we can stop our pursuits and hope that other nations will follow our lead.

If we keep at it, destruction is inevitable. But if we stop our 'progress', then the chance of this annihilation must drop, at least to some degree.
 
Zetherin
 
Reply Fri 20 Jun, 2008 06:14 pm
@Didymos Thomas,
Didymos Thomas wrote:
Same problem with nuclear disarmament.

But what's the alternative? We can either stay the course ourselves, leading to certain destruction, or we can stop our pursuits and hope that other nations will follow our lead.

If we keep at it, destruction is inevitable. But if we stop our 'progress', then the chance of this annihilation must drop, at least to some degree.


I highly doubt we could ever get the entire world to halt progress on technology.
 
Didymos Thomas
 
Reply Fri 20 Jun, 2008 06:20 pm
@Zetherin,
Me, too. But so what?

Should we persist in error because others persist in error? I don't think so.
 
Holiday20310401
 
Reply Fri 20 Jun, 2008 06:20 pm
@Zetherin,
When's the last time the whole world came together for a single cause anyway? What would bring that about?
Would some new kind of religion, one meant for the sake of morality rather than power, be possible? A religion founded by the people for society, and not for the sake of governing.
 
Zetherin
 
Reply Fri 20 Jun, 2008 06:30 pm
@Holiday20310401,
Holiday20310401 wrote:
When's the last time the whole world came together for a single cause anyway? What would bring that about?
Would some new kind of religion, one meant for the sake of morality rather than power, be possible? A religion founded by the people for society, and not for the sake of governing.


The only thing that would bring that about is our survival as a species. That's the common denominator across every culture - it wants to ****ing survive. So if our existence really were threatened, I'd hope that we could put our differences aside and work together. Though I don't really even have faith in that.
 
Holiday20310401
 
Reply Fri 20 Jun, 2008 06:35 pm
@Zetherin,
So one way to make our morality evolve as quickly as technology is advancing would be to stop war, because that would remove the threat to our survival that comes from within ourselves.
 
Zetherin
 
Reply Fri 20 Jun, 2008 06:41 pm
@Holiday20310401,
Holiday20310401 wrote:
So one way to make our morality evolve as quickly as technology is advancing would be to stop war, because that would remove the threat to our survival that comes from within ourselves.


Well, not quite.

There are three levels of threat to humanity.
1.) Biological
2.) Our own ideas (this encompasses everything from our ideals of justice that create war, to hate, love, whatever... basically anything at the heart of us humans that has the potential to destroy us as a species)
3.) Technology

Number 1 hasn't killed us yet, and it doesn't look like it will wipe us off the earth entirely anytime soon (though many people do die each year from diseases such as HIV). Number 2 definitely still has the potential to kill us, and would be the result of us arming the nuclear warheads, as Didy proposed (among other things, like chemical warfare). As for number 3, the singularity hasn't occurred yet, but if and when it does, it may be our biggest threat.
 
Holiday20310401
 
Reply Fri 20 Jun, 2008 06:46 pm
@Zetherin,
Isn't technology, number three, just an outcome of number two - many ideas turned into inventions?
 
Zetherin
 
Reply Fri 20 Jun, 2008 06:56 pm
@Holiday20310401,
Holiday20310401 wrote:
Isn't technology, number three, just an outcome of number two - many ideas turned into inventions?


There's much debate over this, but the rationale that differentiates the two is that number 3, even though it is an outcome of number 2, still stands as a completely separate threat, since it will be conscious in a sense. It will be similar to number 1 in that technology will have no morals; it will destroy just as biological viruses destroy, with absolutely no remorse. And because of this, it won't have anything to do with number 2 anymore; it will have a consciousness of its own that threatens humanity.
 
Holiday20310401
 
Reply Fri 20 Jun, 2008 07:01 pm
@Zetherin,
So a difference between humans and machines is that our conscience works in terms of right vs. wrong, with emotions to give it pizazz. lol. With machines, morality is a nonexistent attribute, given that their decisions are based upon true and false.
Then again, perhaps our emotions are governed by reactions to our interpretations of right and wrong. A machine could have emotions that come from true and false, and would have only an elusive sense of being 'artificial'....

What's your definition of conscience? What would you call somebody whose actions were governed not by a sense of right and wrong - say that sense were deprived - what would they turn to? Emotion? And let's say emotion was not available; what else would determine their actions?
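A minimal sketch of that contrast (hypothetical Python, with the functions and the 'feels wrong' input invented purely for illustration): a machine's decision can be a bare true/false check, while a conscience-like decision would also weigh a judgment of right and wrong before acting.

Code:
# Invented illustration of the contrast above: a bare true/false decision
# versus one that also weighs a moral judgment before acting.

def machine_decides(condition_met: bool) -> bool:
    # Pure boolean logic: right and wrong never enter the decision.
    return condition_met

def conscience_decides(condition_met: bool, feels_wrong: bool) -> bool:
    # A conscience-like decision also weighs a sense of right and wrong
    # (and, for humans, the emotions attached to it) before acting.
    return condition_met and not feels_wrong

print(machine_decides(True))                        # True - acts regardless
print(conscience_decides(True, feels_wrong=True))   # False - refrains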
 
Zetherin
 
Reply Fri 20 Jun, 2008 07:09 pm
@Holiday20310401,
Holiday20310401 wrote:
So a difference between humans and machines is that our conscience works in terms of right vs. wrong, with emotions to give it pizazz. lol. With machines, morality is a nonexistent attribute, given that their decisions are based upon true and false.
Then again, perhaps our emotions are governed by reactions to our interpretations of right and wrong. A machine could have emotions that come from true and false, and would have only an elusive sense of being 'artificial'.


Well, though it sounds absolutely bizarre, it's possible machines may even develop intelligence so advanced that they actually understand our reasoning for morality and essentially have emotions of their own. In the next decade we will have the technology to scan the entire brain, and perhaps delve into some of the reasons why we actually feel, and what in the brain causes these responses. I see no reason why technology won't be able to apply some of this. Sure, we like to think we're special and machines would never have a conscience, but I wouldn't bet that's the case.
 
 

 