Sorry for not posting for a while, school, work, and laziness got in the way.
I've been thinking a lot about ethical issues from a rational perspective and I decided to get rid of all preconceived notions that I had about ethics and morals and to start anew.
My questions had more to do with metaethics. What system of ethics is rational? What differentiates a "good" action from a "bad" action? What is good and bad? Can the "goodness" and "badness" of an action be determined? Is the action itself more important or the outcome?
Well, at first, needless to say, I was a nihilist. I just couldn't think of anything that would justify any ethical system. Ethics are, after all, simply man-made. Sure, there are some "moral principles" ingrained into our minds (see the article posted somewhere in this forum on the topic) and the minds of other organisms, but there is no system of ethics that captures that.
In short, there is no system of ethics that has any rational basis for determining what is a positive and what is a negative action. Although utilitarians might argue that actions that cause pain are negative and actions that maximize pleasure are positive, this does not mean that pain and pleasure are accurate measures of how ethical actions are. Why not flip the order? Why can't we say that actions that maximize pain are moral? There is no rational basis beyond "just because" or "to make people in this world the happiest they can be," etc.
Deontological systems also can't be considered rational. Why should certain actions be considered moral and others immoral? There is a point where these systems fail, simply because they fail to hold up under the scrutiny of an unbiased "third party" that is not attached to any morals.
Likewise, virtue ethics and other ethical systems fail for the same reason: they cannot rationally explain why it is more ethical to improve yourself as a person, etc., etc.
At first, I resigned, threw up my hands in defeat, and thought that morals are irrational and simply a product of evolution and society trying to impose rules upon itself. But then I made a breakthrough in my thought. Like many great (or what I hope to be great) ideas, it came somewhat unexpectedly and without much forethought; it just popped into my head one day.
Namely: "if all morals are simply man made, then the only rational system of ethics can be one that acknowledges that ethics can only exist based on agreements between persons."
In other words, since morals are not natural and are not rational, the only "moral" action is an action that you decide is moral. And as such, the only "moral" guidelines between two or more people can be some kind of written, spoken, or even thought shared ethical system, similar to a social contract.
The basic idea is that Person A agrees with Person B that actions X, Y, and Z are immoral, and that they will either not do those actions to anyone or, at minimum, not inflict those actions on each other.
This is basically what I call a "moral contract." Whereas social contracts usually involve government of some kind, moral contracts are simply an agreement between individuals and groups of individuals as to what actions they will consider good and bad.
Such a system acknowledges that all morals are simply human inventions, just as nihilism, moral relativism, subjectivism, etc. do, but then it builds on that acknowledgment to create a universal system of ethics.