Thomas the Tank Engine



Reply Tue 26 Feb, 2008 06:38 pm
In America (and possibly elsewhere) we have a children's show called "Thomas and Friends". Thomas and his friends are anthropomorphic train engines that talk, have feelings (like guilt and envy), get frightened, and occasionally use reasoning skills.

As I was watching this show with my son, this question popped into my head: "If it were possible to make trains like Thomas, would it be worth the effort?"

Is there any possible benefit to giving our transportation vehicles consciousness?

And if we did happen to perform this feat, would we be morally obligated to these 'beings'? What level of obligation would we have: the kind we have toward fellow humans, or the kind we have toward animals?
 
Edvin
 
Reply Wed 27 Feb, 2008 12:30 pm
@de Silentio,
So, are you asking if AI should be treated as real (human) life, or if vehicles with some sort of human "soul" inhabiting them should be treated as human beings? In either case my answer is yes, provided that the living vehicle is not behaving anything like that older, meaner train that is always busting Thomas' chops. Then my answer would be no. Definitely no.
 
Vasska
 
Reply Fri 29 Feb, 2008 04:25 pm
@Edvin,
I grew up watching that show from time to time. But other than that I don't think anything that we can manufacture (except for clones) can even be considered human, for it will always lack a lot of things. The reason why many things are anthropomorphic is that we like to watch them: it is fun and it does no harm. It also makes things more exciting, gives a twist to the stories, and, used in the proper context, can even be more effective than using real people (like Disney movies).

Back to your question: why should we make talking lamps and give everything artificial life? I think we should be focusing on the quality of our lives. The lives that we live today could be of so much higher quality that we might even evolve into the Ubermensch (superman, for the English readers) that Nietzsche talked about. Maybe when we are at that stage we can start creating new life that might even be partly mechanical, like Thomas and Friends.
 
de Silentio
 
Reply Sat 1 Mar, 2008 01:28 pm
@de Silentio,
Quote:
Edvin - So, are you asking if AI should be treated as real (human) life, or if vehicles with some sort of human "soul" inhabiting them should be treated as human beings?


If something is intellegent, what makes it artificially so? I would not classify these engines as artificial intellegence. When I think of AI, I think of a computer program that is programmed in such a way that it can randomly act in a predetermined manner depending on input. However, these engines are more like humans.

I like how you put 'soul' in quotes. When I've done presentations to high school students about Plato's virtue ethics, I had to mention how he divides the 'soul' into three parts. Now, before I turned off everyone who was not 'of faith', as I will put it, I had to say: "If you don't believe that we have a soul, think of the soul as a person's character, or personality; it is what makes the person who they are."

Now, if an engine thinks, acts, and 'feels' (emotionally and physically) as we do, what makes it less human than us? Does it still have a 'soul' like we do?
 
de Silentio
 
Reply Sat 1 Mar, 2008 01:42 pm
@de Silentio,
Quote:
Vasska - But other than that I don't think anything that we can manufacture (except for clones) can even be considered human, for it will always lack a lot of things


Of course we can't. This question is more like a thought experiment, but I wouldn't call it that because it is not that well thought out. It is more like a thought game, just for fun.

Quote:
I think we should be focusing on the quality of our lives. The lives that we live today could be of so much higher quality that we might even evolve into the Ubermensch


Asking questions like I asked is focusing on the quality of my life. It gives me great pleasure to do philosophy, even if it is with meaningless questions.

Now, if you are saying that we should focus our technology on the quality of our lives rather than on creating machines that think as we do, then sure, you're right, but that is not the point of my post. It is just a fun game to play that enhances my pleasure (and, I would hope, others').

As for the Ubermensch, I don't think man will ever be able to perform the feat of creating intrinsic, objective value. Yes, on a personal level we create value in our own lives, but that value stems from the perceived value that I have experienced others as having. Turning inward to ourselves to find value in the world outside of us is an idea that discounts experience. I cannot create my own value, because no matter how hard I try, those values will always be influenced by my conditioning.
 
Didymos Thomas
 
Reply Sat 1 Mar, 2008 02:05 pm
@de Silentio,
So, basically, you ask what obligations we have to machines that can think?

Whether metal or flesh, if it thinks I see no reason to differentiate between them when we speak of moral obligations.
 
Vasska
 
Reply Sat 1 Mar, 2008 02:11 pm
@de Silentio,
de Silentio wrote:
If something is intellegent, what makes it artificially so? I would not classify these engines as artificial intellegence. When I think of AI, I think of a computer program that is programmed in such a way that it can randomly act in a predetermined manner depending on input. However, these engines are more like humans.

I like how you put 'soul' in quotes. When I've done presentations to high school students about Plato's virtue ethics, I had to mention how he divides the 'soul' into three parts. Now, before I turned off everyone who was not 'of faith', as I will put it, I had to say: "If you don't believe that we have a soul, think of the soul as a person's character, or personality; it is what makes the person who they are."

Now, if an engine thinks, acts, and 'feels' (emotionally and physically) as we do, what makes it less human than us? Does it still have a 'soul' like we do?


Nothing personal or anything, but you consistently wrote 'intelligence' wrong.

You consider AI to be "a computer program that is programmed in such a way that it can randomly act in a predetermined manner depending on input". However, if you look at our brains, aren't they programmed too? Maybe in a different way, one still beyond our grasp, but they are programmed. The new generations of robots often come equipped with a basic (human) character. However, these robots can always be controlled, unless they can reprogram themselves to have the same characteristics as us humans, or even characteristics above ours. In the first scenario we cannot consider them human, but we are free to consider them equal to us.

Thomas and Friends are fictional and therefore cannot be considered anything other than fiction. However, if they were real, we might want to consider them equal to us, but not human, for they still lack many characteristics.

Plato's definition of the soul sure is interesting, and I will read more about it as soon as possible. To give my opinion about it now, without even knowing much about it, would be foolish. If you wish to explain Plato's definition of the soul more clearly, please do.

de Silentio wrote:
Of course we can't. This question is more like a thought experiment, but I wouldn't call it that because it is not that well thought out. It is more like a thought game, just for fun.

Asking questions like I asked is focusing on the quality of my life. It gives me great pleasure to do philosophy, even if it is with meaningless questions.

Now, if you are saying that we should focus our technology on the quality of our lives rather than on creating machines that think as we do, then sure, you're right, but that is not the point of my post. It is just a fun game to play that enhances my pleasure (and, I would hope, others').

As for the Ubermensch, I don't think man will ever be able to perform the feat of creating intrinsic, objective value. Yes, on a personal level we create value in our own lives, but that value stems from the perceived value that I have experienced others as having. Turning inward to ourselves to find value in the world outside of us is an idea that discounts experience. I cannot create my own value, because no matter how hard I try, those values will always be influenced by my conditioning.


You are right about the Ubermensch being a personal wish and reality for us. Damn, I was wishing so hard that the Ubermensch would be a reality for everyone.
 
de Silentio
 
Reply Sat 1 Mar, 2008 02:26 pm
@de Silentio,
Quote:
Nothing personal or anything, but you consistently wrote 'intelligence' wrong.


Sorry, I hate misspellings. I used to type my posts in Word and then copy them over, but there are always formatting issues and it leads to a lot of extra work, so I stopped doing that. Again, I apologize.

Quote:
but not human, for they still lack many characteristics.


Can you please expound on which characteristics make you human?

Quote:
Plato's definition of the soul sure is interesting, and I will read more about it as soon as possible.


I don't think that is Plato's definition of the soul; it is just what I say because I don't want to turn people off when I speak about the soul.

I would suggest reading Plato's Phaedo. He talks a lot about the soul in that book, and it is relatively short. Also, you can look to the first three books of the Republic.

I will try to find time to make a post about Plato's soul in the Plato section. (no guarantees, it has been a while since I've focused on such matters)
 
Vasska
 
Reply Sat 1 Mar, 2008 02:45 pm
@de Silentio,
de Silentio wrote:
Sorry, I hate misspellings. I used to type my posts in Word and then copy them over, but there are always formatting issues and it leads to a lot of extra work, so I stopped doing that. Again, I apologize.


You can install the Firefox browser, and it should already have a dictionary built in (version 2 and above). All you have to do is go to an input field (so hit Reply on any topic), right-click with your mouse in the input field, and select "Spell check this field". Words you mistyped will be underlined in red and can be "fixed" by right-clicking on the word (just like in Word). It saves me a lot of trouble.

de Silentio wrote:

Can you please expound on which characteristics make you human?

For me, human characteristics are basically everything that defines human behavior, from emotions to body language to how we move, think, and do everything.


de Silentio wrote:

I don't think that is Plato's definition of the soul; it is just what I say because I don't want to turn people off when I speak about the soul.

I would suggest reading Plato's Phaedo. He talks a lot about the soul in that book, and it is relatively short. Also, you can look to the first three books of the Republic.

I will try to find time to make a post about Plato's soul in the Plato section. (no guarantees, it has been a while since I've focused on such matters)



Thank you. I think I'm going to buy the complete works of Plato, for I'll probably end up with all his books anyway. To buy the complete works in one book is probably cheaper.
 
de Silentio
 
Reply Sat 1 Mar, 2008 02:55 pm
@de Silentio,
Quote:
For me, human characteristics are basically everything that defines human behavior, from emotions to body language to how we move, think, and do everything.
Would you consider Isaac Asimov's robots human? (If you have not read his books, maybe you have seen the movie "I, Robot".)

Quote:
To buy the complete works in one book is probably cheaper.


Make sure they are good translations. I like the Benjamin Jowett translations myself. Barnes and Noble has a 'Great Dialogues' book that is only missing a few, and 'The Republic', which is a separate book (they are the 'Barnes and Noble Classics' editions). Together they cost roughly $18. They are regular book sizes, which I prefer. For the dialogues they are missing, you can always pick them up individually. Just a thought.
 
Vasska
 
Reply Sat 1 Mar, 2008 03:20 pm
@de Silentio,
de Silentio wrote:
Would you consider Isaac Asimov's robots human? (If you have not read his books, maybe you have seen the movie "I, Robot".)

I think that the robot as it was seen in I, Robot can be equal to a human. However, robots must obey these laws, and in this context will always be seen as highly advanced servants and nothing more (see the sketch after the list):
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
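Just to make the ordering explicit, here is a minimal sketch (in Python, with attribute names invented purely for this illustration, not taken from Asimov) of how each law is only consulted once the laws above it are satisfied:

Code:
from dataclasses import dataclass
from typing import Optional

# Hypothetical description of a candidate action; the attribute names are
# made up for this example.
@dataclass
class Action:
    harms_human: bool = False         # would violate the First Law by action
    allows_human_harm: bool = False   # would violate the First Law through inaction
    disobeys_human_order: bool = False
    endangers_robot: bool = False

def first_violated_law(action: Action) -> Optional[int]:
    """Return the number of the first law the action violates, or None."""
    if action.harms_human or action.allows_human_harm:
        return 1  # the First Law outranks everything else
    if action.disobeys_human_order:
        return 2  # only checked once the First Law is satisfied
    if action.endangers_robot:
        return 3  # self-preservation has the lowest priority
    return None

# An order that is merely risky to the robot itself only trips the Third Law:
print(first_violated_law(Action(endangers_robot=True)))  # -> 3
print(first_violated_law(Action(harms_human=True)))      # -> 1

Because obeying humans sits above the robot's own preservation, such a robot really is built to be a servant first.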
I still need to read his Robot series. My reading list grows bigger and bigger every day.

de Silentio wrote:

Make sure they are good translations. I like the Benjamin Jowett translations myself. Barnes and Noble has a 'Great Dialogues' book that is only missing a few, and 'The Republic', which is a separate book (they are the 'Barnes and Noble Classics' editions). Together they cost roughly $18. They are regular book sizes, which I prefer. For the dialogues they are missing, you can always pick them up individually. Just a thought.


I was thinking about buying this book from Hackett Publishing. It costs around EUR 62 ($94) and, as far as I can see, has everything written by Plato (around 1,800 pages).
 
Whoever
 
Reply Mon 22 Jun, 2009 08:35 am
@de Silentio,
If we can distinguish a human being from an ape, then it shouldn't be hard to distinguish one from a train, regardless of whether either is conscious.

If Asimov's robots are human beings then why do we insist on calling them robots?

If the question is should we respect sentient beings and do them no harm, then I'd say yes, whether it's a human being, a flea or Thomas the Tank Engine.

He was born in Wales, by the way, and later emigrated.
 
 

 