Let’s consider a blacksmith, one who makes hammers. He creates a pretty straightforward product with an obvious use in mind, usually free of any moral complications. He makes the hammer, people acquire it and build things with it. If someone uses the hammer to drive the nails that keep a house standing, that person does not share the success or failure of the task with the blacksmith.
The interesting thing about that example is that we can start adding complexity, in an attempt to wade into morally muddy waters. For instance, what is the responsibility of the blacksmith if someone uses the hammer to murder someone? What if, for some reason, that becomes the dominant use of the hammer? Should the blacksmith consider himself morally complicit and cease production of hammers? In that case, what about the impact on the intended users of the product?
It is easy to draw parallels between the blacksmith and contemporary engineers. Of course there are differences, but I find that most dilemmas techno crafters face today essentially boil down to something like the blacksmith’s conundrum. When someone takes the time to actually think about them, two questions dominate most lines of thought around the issue - should someone claim or be assigned responsibility for the uses her creations are put to, and what can she do to prevent usage she doesn’t agree with?
These questions aren’t new. They are present in the minds of anyone who builds tools for others to use. Being a software engineer and constantly thinking about this, I recently realized something I’ve been doing wrong, something that seems to be a significant obstacle to deciding such questions.
You see, engineers tend to have very analytical minds. Most of humanity does, but the engineering craft attracts people who are particularly good at it - at that tendency to abstract a bunch of special cases into a generic ruleset so you can act with consistency.
I think that is the problem, right there. Rules.
Humanity has been trying to solve ethics for a very long time. Most frameworks that have been proposed try to provide an analytical answer, a set of rules which, when applied, can determine whether an act is ethically sound or not. Engineers’ tendency to think in such straight lines naturally predisposes them to follow that same reasoning, and eventually they hit a wall built of edge cases and hypotheticals, not unlike the ones I used to complicate the blacksmith’s conundrum.
“If I decide not to sell my hammers to convicted felons, how can truly reformed ex-criminals find a living building stuff?”
“If I decide to sell to all those legally allowed to own hammers, how can I avoid engaging with people whose building standards I consider subpar? Can I correctly judge their craftsmanship?”
“If I don’t sell to someone, they will get the hammer from someone else anyway. I might as well make a profit out of it.”
These sorts of questions pop up all the time and are usually treated as the end of the discussion about moral responsibility, since they admit no answer. No analytical answer, that is. An obvious shortcoming of these questions is that the hypothesis assumes a hard, analytical rule in place, while the resolution contains a fuzzy, emotional dilemma that arises out of applying the rule. If you think about it, that doesn’t really make sense. You start the question by stating that you have a hard rule for deciding ethical choices, and you immediately accept that there are cases where it is insufficient, based on an emotional, non-quantifiable response.
There are, of course, two ways one can solve this.
One is to stop pitting emotional dilemmas against analytical frameworks. After a ruleset has been decided, it must be acted upon diligently, and any emotional objections to the result must be considered within the broader principles the rules were derived from, assessed and potentially assimilated. There is a name for that - we call it law. Common law is a set of rules that tries to capture what society feels is “right” to do or not. Once that feeling is encoded, it is not open to interpretation. There are no exceptions; otherwise you might as well not have the rule in place and judge on a case-by-case basis.
The other way to deal with moral questions is to get rid of the analytical part altogether. That means the blacksmith will replace any analytical framework with a case-by-case evaluation of the morality of each transaction. It means, essentially, that the blacksmith will use his emotion in place of his logic when he has to decide who gets to use his hammers.
Sounds arbitrary? Abstract? Impractical or unfair, perhaps? I disagree. I think it’s as fair as any ruleset, and potentially more accurate.
But this post is already too long for my taste. Since I have laid out my thesis and a nice parable to draw from, I will leave you to think about it - I will too. I will follow up with some posts elaborating on this idea, showing why it can produce quite good results (and defining what “good results” means in this context) and, perhaps most importantly, expanding it to apply to modern corporate settings.
Till next time.