The onTap Framework is Ugly!

The onTap framework is ugly. Yep. You heard me, I said it. The onTap framework is ugly, and I don't mind admitting that...

As a programmer I spend a lot of time thinking about the "most elegant" way to solve a problem. In reality, of course, it's not the most elegant way; it's just what I feel is the most elegant way I can think of with the resources currently available to me. It's always possible that someone else has thought of a more elegant solution, or that there's a more elegant solution that's simply not available to me, often due to financial constraints. The release of recent versions of ColdFusion and the addition of application-specific mappings actually resolved a number of ongoing issues I had personally with code I felt was "ugly" or "inelegant". And that's not the only time a ColdFusion upgrade has helped me to clean up something I had always struggled with. The addition of onMissingMethod made possible a long-time dream of mine: a lazy-loading function library that could load utility functions on demand. And don't think I'm being hyperbolic when I say "long-time dream" -- I was trying to accomplish that with ColdFusion 5, immediately after CFSCRIPT was introduced and made custom functions possible in the first place. Yet with all the advancements to the core language, I still routinely struggle internally with this notion of "elegance".
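For the curious, the gist of that lazy-loading idea looks something like this. This is just a minimal sketch of the concept, not the framework's actual code -- the udf.* component path and the run() convention here are hypothetical:

<cfcomponent output="false" hint="Lazy-loading function library: utilities load the first time they're called">

	<cfset variables.loaded = structNew() />

	<cffunction name="onMissingMethod" access="public" returntype="any" output="false">
		<cfargument name="missingMethodName" type="string" required="true" />
		<cfargument name="missingMethodArguments" type="struct" required="true" />

		<!--- instantiate the utility on first use (assumes one small CFC per utility under a hypothetical udf.* mapping) --->
		<cfif NOT structKeyExists(variables.loaded, arguments.missingMethodName)>
			<cfset variables.loaded[arguments.missingMethodName] = createObject("component", "udf." & arguments.missingMethodName) />
		</cfif>

		<!--- delegate the call, passing the original arguments through --->
		<cfreturn variables.loaded[arguments.missingMethodName].run(argumentCollection=arguments.missingMethodArguments) />
	</cffunction>

</cfcomponent>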

Part of the problem is the way that people think. Scientists used to believe that humans followed a "path to action" like this: think -> do -> feel. In this model, you would think about what you're going to do, do it, and then afterward decide how you felt about that action. Was it good or bad? Should you do it again? This is a very logical way of handling the world; however, it turns out to be the opposite of the way we actually behave. Our actual path to action (and this includes us programmers) looks like this: feel -> do -> think. This path is not rational, but it is very, very efficient, which is why our brains evolved this way. It's also the cause of what I've called opinion driven development (ODD). Andre Marquis explains how this works in this video. In this model we have an emotional desire to do something like eat or play a game, we do it, and then afterward we rationalize the decision. Usually we create a "logical" explanation for our actions which is incorrect, because it assumes our actions were inspired by reason instead of our emotions. Even the belief that we behave rationally is inspired by our emotions -- it's uncomfortable to think that we might behave irrationally, regardless of how strong the empirical evidence is. And it's that discomfort, that very emotional discomfort, that makes it difficult for us to admit to irrational behavior. We can, however, develop ways of thinking that allow us to entertain these ideas without that discomfort, specifically by developing a "growth mindset".


Software Engineering and the Learning Curve

Have you ever considered the age-old nature versus nurture debate? You should. What is it that makes a person a great software architect? Is it an innate talent that some people have and some don't, determined by genes and brain size? Or is it determined by a person's passion and a persistent effort to seek challenges and overcome obstacles?

I used to think that I wanted to work with the best. I say "used to" because I've come to realize that this idea has caused some problems for me in the past, and I've since changed my mind about it. If you know me at all, you're probably at least marginally familiar with my interesting job history. Yet in spite of those challenges, including not having a degree, I'm still an Adobe Community Expert today. That's only true because I continue to push myself -- and in the past few years, more specifically, because I've started pushing myself in an area unrelated to software: personal development and communication skills.

Joel Spolsky obviously wants to work with the best. Who wouldn't, right? He's talked about this both on his blog and in Inc. Magazine. If you read his articles on the subject, there's a particular way he talks about his new hires and the way he courts them that used to make me think "wow". However, after reading Carol Dweck's book Mindset: The New Psychology of Success, now I'm thinking "uh oh".


Honor Their Memory by Living for Today

Last week an ambulance rushed my brother-in-law, Josh Davis, to the hospital for what they thought was a heart attack. At the hospital doctors discovered that he had four aneurysms on his aortic valve and rushed him to surgery, where they gave him a synthetic replacement valve. He didn't wake from the anaesthesia. Josh was in his late 40s.

Early this morning, around 6am, my fiancée's brother, George Singer, experienced coronary failure at the age of 39. Over the course of the week my fiancée and I have also received news that four more of our older family members have been admitted to the hospital with serious health concerns, including my grandmother (on my father's side), my grandfather (on my mother's side) and both of my ex-wife's parents.

Although we live with mortality, I don't believe mortality defines us. I believe what truly defines us is how we respond to the events in our lives. Both of these deaths were truly unexpected, and they remind me that the time we have is precious. None of us knows how many days we have left - today is the only day, now is the only time. We should always spend the time we have wisely, improving ourselves, improving the world, and reminding our loved ones how much they mean to us. Every day should be our Oscars.


Getters, Setters, XFAs and YAGNI

It occurred to me around the time I decided to post this blog that it's somewhat similar to a post on Adam Haskell's blog from yesterday. In Adam's blog he's talking about "getters and setters" in our CFCs (also known as "accessors" and "mutators" respectively).

Rant in this case is an appropriate description. That's not necessarily a bad thing. Sometimes it seems like we need a bit of a shake or maybe a shock to make us examine our habits. Habits are value neutral, they're neither good nor bad -- but once we've developed them they are awfully hard to break. And sometimes we really need to take a second look at our habits to see if they're really serving us.

Adam's premise is simple. Ninety-plus percent of the getter and setter methods we write in our CFCs don't actually do anything that's necessary. And so the YAGNI principle (You Ain't Gonna Need It) would say you shouldn't write them. The idea behind YAGNI is pretty simple, and it's actually a fairly astute observation about human nature. As we're looking at a task we can see all sorts of problems that may crop up... but that doesn't mean they will. And humans aren't very good at predicting the future, so we say "don't prepare for problem x until you have problem x, because otherwise You Ain't Gonna Need It". It's precisely the same as "don't put the cart before the horse" and "we'll cross that bridge when we come to it".

Personally I think in the real world we need a balance of both although balancing those kinds of things is particularly challenging for humans because we're such creatures of habit. We're much more inclined to develop either a YAGNI habit or a Big Design Up Front (BDUF) habit and stick with that.

In my case I agree with Adam that we shouldn't be writing "useless" getters and setters. And at the same time I also agree with others that we need that encapsulation (and so I tend to shy away from "this" although I have made some exceptions). That's the reason why I use dynamic setters and getters, which solves both problems at once. Yes you can have the best of both worlds -- it's not a black and white problem and your choices for solving it are not black and white either.
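By way of illustration, here's roughly what the dynamic accessor idea looks like with onMissingMethod. Again, a minimal sketch of the idea rather than the framework's actual code, and the variables.instance convention is an assumption:

<cfcomponent output="false" hint="Bean with dynamic getters and setters">

	<cfset variables.instance = structNew() />

	<cffunction name="onMissingMethod" access="public" returntype="any" output="false">
		<cfargument name="missingMethodName" type="string" required="true" />
		<cfargument name="missingMethodArguments" type="struct" required="true" />
		<cfset var property = removeChars(arguments.missingMethodName, 1, 3) />

		<!--- getFirstName() reads variables.instance.firstName --->
		<cfif left(arguments.missingMethodName, 3) EQ "get">
			<cfreturn variables.instance[property] />
		</cfif>
		<!--- setFirstName("Bob") writes it -- encapsulation intact, no boilerplate methods --->
		<cfif left(arguments.missingMethodName, 3) EQ "set">
			<cfset variables.instance[property] = arguments.missingMethodArguments[1] />
		</cfif>
	</cffunction>

</cfcomponent>

The CFC keeps its private variables scope (no "this"), while calling code still reads naturally: user.setFirstName("Bob") and user.getFirstName() both just work.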

Which brings me to my own pseudo-rant. I say pseudo because I'm not really going to rant about this, just point out the similarity to Adam's rant. It's kind of funny and appropriate that I'm comparing this to one of Adam's rants, since he's the new project lead for the Fusebox framework, and the XFAs I want to talk about are a concept that originated in Fusebox and has since spread to all the popular framework communities. And I'm going to say something very similar to Adam here.

You Ain't Gonna Need It. So why are you spending all that time writing XFAs?

Now XFAs have their place. When you have a view template with links and those links are context-sensitive, then yes, absolutely, you should have some standardized way of swapping out the default target page for another page.

The problem is not that we have XFAs. XFAs by themselves are a good idea. The problem is that a long time ago it was decreed that all navigation must go through XFAs. And hence we developed a habit of doing it, and of thinking about it as a good thing, all the time. But the reality is that well over ninety percent of the time it's busy work. That's because the vast majority of view templates are NOT reused in a manner that makes those links context-sensitive. So we really should only be creating XFAs in those unusual cases in which we're actually using them. If a particular view is included in one page in your application, there's no need for XFAs. Even if it's included in two or three pages, there may still be no need for XFAs if the links don't change. Only create XFAs when you have an actual need for them.

All that being said, the onTap framework offers a few different methods by which XFA-style navigation can be injected into views. They're parts of the framework's templating engine which I've described before as an "HTML abstraction" and they're pretty easy to use.

<cf_html>
<div class="menu" xmlns:tap="xml.tapogee.com">
<a href="?netaction=stuff">
<tap:url name="netaction" value="xfa.somethingelse" />
stuff
</a>
</div>
</cf_html>

This is a pretty simple example. The tap:url tag allows you to designate whatever variable you want to inject into the href attribute. If the href attribute already has a parameter by that name, the tap:url tag will replace the original value. There are several other ways that links like this can be altered, including XSLT, which is a pretty powerful method that allows the link tag and surrounding HTML to be modified in other ways as well.

Cognitive Science - It's Not Just Good, It's Good For You!

I've been talking a fair amount about cognitive science lately, particularly with regard to its relationship to usability and user interaction, and to the nature of programming as a profession.

The former relationship should be reasonably obvious - users are human (we hope... most of them anyway, even including pointy-haired bosses), and if we want them to use our software we have to create it with interfaces that are easy and useful, taking advantage of the quirks (strengths and weaknesses) of human thought. We need to know, accept and embrace not only that users want things to happen quickly, but that things need to be named and labeled in ways that will make sense to them at first glance WITHOUT thinking about them. (There's a reason why Steve Krug's usability book is titled Don't Make Me Think.) Unlike programming tools, we need to make most decisions for our customers in advance based on common assumptions about what our users are LIKELY to do, not what they MIGHT want to do (you handle the "might wants" with additional tools on the back-end, not by forcing them to make decisions up-front).

These are all lessons that Microsoft and Seapine have yet to learn, despite their years of operation -- although the creators of Subversion and TortoiseSVN have learned them! Where Seapine's "mature" Surround SCM is constantly bonking me on the head with a librarian's desk-reference, throwing up numerous annoying dialogs I mostly don't use any time I want to so much as glance passively at a file, TortoiseSVN only shows me a dialog when I ask for one, and provides helpful information up-front (files not currently checked in when committing) that's impossibly buried in the Surround SCM client.

But although I constantly champion the notion of programmers improving their usability skills, this article isn't really about that. This article is about understanding cognitive science so that we can have a better understanding of our own habits and the reasons why some things repeatedly bite us in the ass. What's most interesting and ironic to me about cognitive science is that programmers are CONSTANTLY talking about it... they just don't know that they are. I actually didn't know much of anything about cognitive science until this past year or so. I started reading books like Dan Ariely's Predictably Irrational and I realized that, although I have in the past been rather snarky to Larry Lyons on the cf-community mailing list, I probably should have been asking more questions! Although I'm not sure what his official degree status is, Larry studied behavioral psychology in college.

So why all the hubbub?

It's important. It may be more important than studying programming theory or design patterns!

Why?

Consider that your mind is merely a program that's been designed by millions of years of evolution. That program has evolved a number of its own design patterns - ways in which the mind works relatively consistently. These design patterns have evolved over those many millions of years to produce people like you and me, who are very well adapted to survive and even thrive in a... NON-AUTOMATED world. By this I mean to emphasize that we have not evolved to be good at programming computers - far from it; if anything, computers seek to replace the tools we've naturally evolved for survival in a pre-modern world. So our mental machinery isn't designed for programming - it's designed for things that are far less abstract, and as a result, what "works well" for programming computers is often very different from how we've evolved to think.

So where does that put us?

Programming computers without at least attempting to understand cognitive science is like taking a cave man, sitting him in a Swiss watch factory (pre-quartz) and expecting him to assemble fine time-pieces with no training. His brain hasn't evolved to handle it and so he'll work on those watches using the same design patterns that evolved for him to work on hunting and gathering for food. Will he eventually figure it out? Yeah, sure. But the quality of his product will be sub-standard.

Cognitive science is the training to understand the brain's built-in design patterns and how to apply them (or more importantly avoid misapplying them).

What's happening in the software industry is that a lot of people are talking about cognitive science. In fact, they're RANTING about it all the time! They just don't know that's what they're doing. And they're demanding that things change, without first trying to understand why they are this way already! They're looking at the cave man in the watch factory and they're wagging their finger at him and shouting "shape up! You need to produce better quality!" But they're not offering him the training that would actually help.

Here are a few examples:

Every single one of those articles should have made some mention of cognitive biases... yet none of them did.

And I'll go a step further and I'll give another example from my own experiences. Over the years I've received a number of emails saying "x is bad" or "x is a bad idea" ... If you read the above examples, you can see some of that with regard to the "active record" design pattern. The problem is that "active records are bad" is the way our brain is designed to think -- it's simple and it's concrete. The reality is at the extreme opposite end, much more akin to what Sean Corfield says in response to the question "which is the best framework to use", i.e. "it depends". The question "which is the best", sets up the other person with the expectation that you'll give a singular answer - simple, concrete, the way we're designed to think - irrational. The active record design pattern can't be good or bad outside of some context. Every design pattern exists for the purpose of addressing a specific problem or set of problems - they have advantages and they have drawbacks or "consequences".

It's important to note that the evolution of human thought bears no resemblance to "logic". There are logical reasons for our having evolved to think in particular ways, but those reasons are obscure and counter-intuitive. Moreover, logic itself has never been a survival strategy for a species. The fact that we can be logical or rational doesn't mean that we are very often, particularly because logical thinking isn't really an advantage to our survival. Logical thinking doesn't much help the cave man to find food or avoid predators. Our "irrational" gut instincts are much more effective at managing those tasks. Similarly, logical thinking in the face of an irrational, pointy-haired boss generally doesn't win you brownie points at work either.

As such we've evolved certain design patterns like a "herd instinct". Other people (not myself) will lambast humans in general for even having a herd instinct, referring to them as "sheeple". I don't take that approach. I prefer to think of the herd instinct as "friendliness" and "social cohesion" - it is what allows us to build communities and do big and bold things like building projects that took more than a generation to complete. If not for our desire to band together, none of those projects would have gotten off the ground! It is a good strategy for those things. But my desire isn't to blame people for thinking irrationally ("sheeple"), it's to understand how and why we think the way we do and to encourage others to study thinking as well as a means of not just improving, but elevating our work.

The consequence of herding of course is that it's not a great strategy for growing a society of "free thinkers", leading us to the comments Machiavelli wrote about "innovators" in his biting satire the Prince. The effect of confirmation bias means that we tend not to think of the drawbacks or "consequences" of design patterns. We tend only to think about how they solve our problems. So what are some of the other drawbacks of the "herd instinct" design pattern? It discourages good programmers from implementing great new and innovative solutions to age-old problems for fear of reprisal.

Here's an example from outside the software world. Rabies has been a death sentence since men first started walking upright, and there's still no truly effective treatment. Jeanna Giese is the world's first full recovery from rabies. Her doctor had never seen a case of rabies before, and it's probably a good thing! If he'd been a rabies expert, she would be dead! Why? Because the conventional wisdom in the field is that there is no hope for rabies victims. The standard treatment is pain-killers. And although she is a fantastic example of both the outsider effect and the problems of relying on our education, when Dr. Willoughby spoke about his Wisconsin Protocol at the international rabies convention, other highly respected doctors told him that it shouldn't be used until there is laboratory evidence of its effectiveness! And this is a condition in which the alternative is guaranteed death!

That kind of fanatical, devout and thoughtless reprisal is hard to swallow. It's precisely what Machiavelli was talking about in the Prince.

There's something else that comes out of the example of Willoughby's rabies protocol, and here's a real kicker. These doctors are also not learning the lessons of cognitive science. Well, why should they? They're physicians, not psychologists! And they're doctors! They're beyond all that mushy stuff that gets in the way of either real science or helping the patient.

Aha! But that's the problem in a nutshell.

The fact that we (doctors and programmers) think we're above or beyond the problems created by cognitive bias is merely another example of cognitive bias! And it's the real crippler. It's called the overconfidence effect. Nearly everyone, without respect to age, sex, religion or PROFESSION, believes themselves to be less susceptible to dangers than those around them. Every smoker believes they're unlikely to get lung cancer. Each of us (myself included) believes ourselves to be more rational and more logical than our peers... but we're not.

The problem here is that the "overconfidence design pattern" has been quite effective in the past in helping us to thrive as a species in a pre-modern world. It does not, however, help us to program computers; in fact, it hurts us. We're apt to look at a blog entry like this one and think "that's interesting, but it doesn't apply to me -- I always judge the pros and cons of design patterns appropriately". Do you? It's not likely. It's not likely that Hal Helms does, or Sean Corfield, Joe Rinehart, Matt Woodward or Peter Sommerlad.

That's the reason why we end up with blog entries that say things like "active records are bad". Really? That statement makes no sense. It's a result of the oversimplification design pattern. Without the context of the problem a pattern is being used to overcome and the impact of its consequences on the rest of the application, you can't say that it's good or bad. What's bad is slavishly applying patterns where they're not very useful, or slavishly avoiding them where they would be.

It's that slavishness that causes Java programmers who come to ColdFusion to complain about its STRENGTHS, like its lack of strict typing, or that causes any programmer to praise the virtues of case-sensitivity. (See Sean Corfield's CFUnited presentation, Heresy! Embracing Duck Typing in CFCs.)

And THAT slavishness is a result of the design patterns in your brain!

Cognitive science.

I CAN'T stress it ENOUGH.

Credit Where Credit Is Due

A while back I commented that I'd been working with ColdBox recently and "not seeing the magic"... moreover, I commented that I'd been seeing a lot of articles where people were raving over how easy a framework had made their job when in reality it had done either nothing, next to nothing or in some cases less than nothing for them (i.e. busy work). I didn't actually explain what I meant by that comment at the time... so here are a couple of examples:

Back in April, Matt Quackenbush posted this article where he talked about "mapped views" in ColdBox. By way of explanation, ColdBox uses these handler.cfc files where each function is an "event", so for example the url index.cfm?event=home.login executes /handlers/home.cfc->login(Event). Then within that login function, you have to set the "view" for the event, which is the name of a template in the /views directory. It looks like this:

<cffunction name="login" access="public" output="false" returntype="void">
<cfargument name="event" required="true" type="ColdBox...requestContext" />
<cfset event.setView("loginForm") />
</cffunction>

So in Matt's description, he's talking about having some common forms that have to be built every time he creates a new site or application for someone. The login form actually is his example. And he doesn't like having to copy and paste those forms from one application to the next since they don't really change. I can appreciate where he's going with this...

He loses me, however, when he starts talking about how easy it was to create a RequestContextDecorator.cfc to wrap around the event, which then checks to see if the first character of the setView argument is a / so that he can tell the framework to use the mapping... A whopping 43 lines of code later, in his own words, "That's all there is to it, folks. Yet another task made simple by the power of ColdBox!" ...

umm... Matt? 43 lines of code was easier than this?

event.setView("loginform");
...
<cfinclude template="/globalviews/loginform.cfm" />

ColdBox didn't actually make that job easier -- it made it harder. It added a learning curve where none was needed. And in the final analysis, there's no functional difference between his decorator and just using a local view template with an include -- except that the decorator will be less mechanically efficient.

Then just the other day Will Tomlinson posted on the cf-talk list a subject titled "MG is so cool!" So I had a look to see what marvelous new feature he was going on about... turns out, he was raving over the fact that Model-Glue can turn this:

<cfset application.myappsettings.mysetting = "foo" />

Into this:

<modelglue>
   <config>
   <setting name="mysetting" value="foo" />
   </config>
</modelglue>

Which is then later retrieved via:

<cffunction name="someFunc" access="public" returnType="string" output="false">
   <cfset var theSetting = variables.config.getConfigSetting("mysetting") />
   <cfreturn theSetting />
</cffunction>

This was after having gone through two other methods of setting that same variable before finally settling on this one.

umm... okay. I'm not seeing the magic.

Yes I understand encapsulation. That's not my point. The point here is that nobody's really benefited from the extra learning curve here, in spite of Will's enthusiasm for a central config file.

I can only imagine this is a result of endowment. I know that as a species we definitely view things differently once they've had time to "grow on us". I honestly get much the same feeling from people who rave about Eclipse. Others I know in the IT industry tend to describe it with the quirky phrase "drinking the Kool-Aid", a reference to Jim Jones.

Have I done it? Probably. I am human and I'm pretty certain that means I'm endowed in a number of ways. As a matter of fact, I encourage you to let me know if you see me saying something like this that looks like it's inspired more by endowment than by the event itself. I'd be interested to know how this phenomenon affects me. :)

Anyway, in both of these cases large amounts of credit seem to be given to frameworks when any credit, if deserved at all, seems to belong to the ColdFusion server. And really, giving accolades like this to the framework authors IMO even kind of cheapens them too. If you're going to give them accolades, give them accolades for things that actually are spectacular, like Mach-II's integration of multi-threading across several different CFML engines, Model-Glue's scaffolds (which I don't care for and so haven't used), etc.

the Black Swan

Look what I found! By accident, no less... A couple weeks ago I paid for an advertisement on BlogAds.com, and after having it up for a week I was actually rather disappointed with the result. I'd paid about $2.85 per clickthrough to woohooligan.com, and of course, none of those clickthroughs converted. OUCH!

However, as I logged on to blogads.com to check my stats, I noticed something in the sidebar that caught my attention and discovered that the site is also home to founder Henry Copeland's blog. The other blog entries I read weren't terribly useful (interesting though they were), but I scrolled down and found his bio in one of the entries.

In his bio, another link caught my attention amongst a list of his favorite books, labelled Fooled by Randomness. And here I discovered that the Black Swan, a book I've been interested in reading (but hadn't yet ordered), is actually the #1 best-selling nonfiction book on Amazon.com published in 2007. So how did I discover it? Amazon suggested it when I ordered some other books...

Here's the crux. Nassim Nicholas Taleb, the author of the Black Swan, just happens to be saying much the same things I've been saying recently... It all goes back to the outsider effect -- how the best inventions are created by people who work outside the industries they revolutionize, precisely because they are free to experiment with techniques and ideas that the industry proscribes as blasphemous. The difference is he's not talking specifically about software, he's talking about everything.

It's some food for thought, in light of my comments in Iron Man and about the 80% failure rate in The Devil Went Down to Silicon Valley.

Compare those articles to Taleb's article in Forbes magazine.

p.s. Henry Copeland's Bio is interesting too.

Keep It Simple

Security frameworks for web applications (if not software in general) seem to be an area in which programmers frequently over-think the problem.

A security application checks to see if the current user is authorized to perform a given task, whether it's viewing a page or updating a record. The application needs to know two things: 1) who the user is, and 2) what they're trying to do. From that it needs to return a simple boolean value: yes they are, or no they're not, allowed to do x.

At all the places I've worked, the application security framework has been far more complicated. I'm not talking about roles -- those are actually a good idea. Roles are a logical grouping of the company's business rules, so internally the framework ought to be able to determine what roles the user belongs to and if those roles are permitted to perform the task. No, it's after roles that the problem becomes inflated.

Usually, I see people inflate the complexity of security by inflating the notion of the task. The task x becomes a site section [s] and a page [p] or a context [c] and then an action [a]. Typically, the action becomes a canonical list like read / write / list / execute. This makes more sense with an operating system, where the security framework's role may only be to guard the file system and therefore these actions apply in all contexts.

In a web application it's kinda silly, because those actions won't apply in all, or necessarily even most, contexts. Take for example an application which stores sensitive information about users, such as their social security numbers. A given user may be able to list the other users and view their names -- they may even be able to view the user detail page that contains the social security number field -- yet not be allowed to view the number itself. In this case "read" may be the only action relevant to the social security number. Or there may be a content management system which allows only certain users to approve content -- in which case "execute" may be the only relevant action.

So in a typical web application security suite, you might see Security.checkPermission(user,"adminSection","products","edit") which determines if the current user is allowed to edit products in the admin area of the application, where "list" and "read" are both part of the available permissions, but neither are used because you want everyone to be able to see your products.

Some years ago Peter Cathcart Wason developed a cognitive science experiment generally known as the "2-4-6 problem". Participants are given a set of three numbers called a triplet (2-4-6) and told that these numbers follow a sequencing rule. They're then asked to deduce the rule through a simple game. They create another triplet and the experimenter tells them if their triplet is valid under the sequencing rule. When they believe they know what the rule is, they simply state the rule and the experimenter tells them if they've answered correctly and won the game.

So a participant would come in and be given the initial set, 2-4-6. He'd then make his triplet, 6-8-10, be told it conformed to the rule, and call out his answer: "even numbers". Overwhelmingly, most of the participants in this experiment guessed that the rule was "even numbers" or, in some cases, even more complicated rules like "counting up by twos". Most people failed to find the correct answer, which was "ascending numbers". The reason most people failed is that they only proposed triplets they believed would be valid. Finding the correct answer requires checking sequences you think won't be valid, until you've ruled out all the alternatives.
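To make the point concrete, here's the experimenter's side of the game in code form (hypothetical, obviously -- Wason didn't need CFML):

<cffunction name="isValidTriplet" returntype="boolean" output="false">
	<cfargument name="a" type="numeric" required="true" />
	<cfargument name="b" type="numeric" required="true" />
	<cfargument name="c" type="numeric" required="true" />
	<!--- the actual rule: any ascending sequence --->
	<cfreturn arguments.a LT arguments.b AND arguments.b LT arguments.c />
</cffunction>

Notice that 6-8-10 passes, which SEEMS to confirm "even numbers"... but 1-3-7 passes too, and only by proposing a triplet you expect to fail (say, 6-4-2) can you actually falsify your pet theory.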

Programmers overcomplicate security frameworks because they only envision scenarios in which their initial concept is relevant. Then later, when they discover another scenario, they end up having to revise the security framework and make it even more complex to accommodate the new scenario.

The Members onTap plugin provides a security framework in which there are never any non-relevant items. It's accessed via a function request.tap.PolicyManager.isPermitted(task,user) which returns a boolean, yes they are or no they're not allowed to perform this task. These tasks can be anything - absolutely anything you want. And they can also be nested (using a forward-slash / character) or not. They're "auto-wired" into the application by using the path to the current base template as the default task. So for example, if a user is viewing the page /admin/product/edit.cfm, the application will deny them access if they don't have permission to perform the task "admin/product/edit". But that task could just as easily be a custom permission that has nothing to do with the context of the application's file structure, like "$myCustomPermission", where the $ is used to ensure the permission doesn't conflict with an existing file-based permission.
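For example, a guard for that custom permission might look like this (a hypothetical sketch -- session.user stands in for however your application tracks the current user; the isPermitted() signature is as described above):

<!--- deny access unless the user holds the custom, non-file-based permission --->
<cfif NOT request.tap.PolicyManager.isPermitted("$myCustomPermission", session.user)>
	<cfabort showerror="You are not permitted to perform this task." />
</cfif>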

This also accounts for the site section or page section in a manner that's much more flexible than typical application security. Where the typical system would have an explicitly declared "site section" and/or "page", allowing a nesting hierarchy of only one to two levels deep, the onTap framework's permission system, by omitting these as considerations, allows permissions to be indefinitely nested (which in practice is only likely to be at most about 5 levels). Thus when you test for the permission "admin/product/edit", you know that it is automatically testing the permissions for "admin/product" and "admin" first, denying access to the nested sections if the user isn't allowed access to the parent. In other words, to "administer products", you first have to be allowed to "administer".
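Under the hood, that kind of cascading test amounts to nothing more than walking the slash-delimited path. A simplified sketch, assuming a plain struct of granted permissions rather than whatever storage the framework actually uses:

<cffunction name="isPermittedPath" returntype="boolean" output="false">
	<cfargument name="task" type="string" required="true" />
	<cfargument name="granted" type="struct" required="true" />
	<cfset var i = 0 />
	<cfset var partial = "" />
	<!--- test each ancestor in turn: "admin", then "admin/product", then "admin/product/edit" --->
	<cfloop from="1" to="#listLen(arguments.task, '/')#" index="i">
		<cfset partial = listAppend(partial, listGetAt(arguments.task, i, "/"), "/") />
		<cfif NOT structKeyExists(arguments.granted, partial)>
			<cfreturn false />
		</cfif>
	</cfloop>
	<cfreturn true />
</cffunction>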

This is, again, like the invention of the stirrup. It works, and it works well across many contexts, precisely because it's simple and makes minimal assumptions about the environment.

The Amazing Mind of the Iron Man

Although I will say that I thoroughly enjoyed the new Iron Man movie, that's not what this blog is about. This blog is about software. Wha?! I think the software industry as a whole has become fairly stagnant in recent years. It needs to be invigorated with not necessarily some new blood, but at least new thinking.

Most of the people in the software industry, even if they agree with my sentiments about new thinking, seem to feel that the answer is AJAX or Web Services or Web 2.0 (which is largely a buzz-word) or some combination of these things. That's where I disagree. The problem is precisely that they've labelled one or more of these things as "the solution". This is a problem because labelling "the solution" is a very different thing than labelling "a solution".

Labelling "a solution" means you've examined a specific problem and resolved it in the present tense. Labelling "a solution" leaves open the possibility that it may not be the best solution available. Labelling "the solution" means you've decided arbitrarily that it's the best solution for not only this problem but any similar problems in the future and it's the only solution you'll use. You've closed the door on any future discussion of the possibility of alternative solutions. You no longer examine the problem at hand and simply apply the golden hammer as prescribed.

Peter Sommerlad has this to say about design patterns:

Sommerlad: I feel guilty as an author of many patterns and supporter of the pattern community because I've come to the conclusion that -in general- Design Patterns (DP) are bad for software design.

Aha! See, I was right! Oh wait... did I say that out loud? Anyway... no, I'm not just going to stop at confirmation bias, there's a lot more to this.

Sommerlad: You might ask why the splendidly successful Design Patterns book [ GoF ] is bad for software design? ... in the early days of OO programming, only guru level people were actually designing working OO systems and the average programmer was stuck with BASIC, Pascal or C.

Note that the three languages described as available to the average programmer were all procedural languages (though there is an OO dialect of Pascal used in Delphi, and C had already evolved into C++). One thing I will say here, though, is that, although Sommerlad may (or may not) disagree, Object Orientation is simply not required to create encapsulation (cohesion and loose coupling). There was quite a lot of good encapsulation and cohesion done with ColdFusion 5, and there's an awful lot of very poor encapsulation and high coupling being done today with ColdFusion 8.

Sommerlad: Those were the people that invented the architectures that later became popular Design Patterns. The gurus were able to consciously think about their design decisions or already had the experience to decide between good or worse designs by a gut feeling. In addition they came from a time when ... principles of simplicity, abstraction, structure, encapsulation, coherence and decoupling were well known ... at least by the people considered capable of good software design.

Sommerlad: Today, Design Patterns allows average developers to design OO systems and get them working that would have been beyond their design capabilities before. This sounds like a great thing, but the relative lack of expertise or brilliance can easily result in bigger software design disasters with DP applied than without.

Sommerlad: Most of the Design Patterns in the GoF book are about introducing flexibility by indirection and inheritance. This is great when you use it to reduce code size and simplify logic by applying polymorphism, but in the hand of the uninitiated Design Patterns are a tool for overengineering and introducing unnecessary complexity. A feeble designer that cannot decide on a system property will use Design Patterns to postpone too many decisions, will speculate about features never needed and will lay a heavy burden on implementers and maintainers of the system.

This reminds me in particular of something I've said before, and I'll say it again here. In the world of web software in particular, we've become accustomed to a situation that the users of other software would never accept. It is a situation that I believe we should not accept, for the very specific reason that it causes major headaches for us as programmers and monetary losses that affect the bottom line for our companies. The users of standard desktop software would never accept software that required programming to install it. Imagine the horror of your relatively computer-illiterate aunt upon reading the installation instructions for Microsoft Office if they involved "step 1 - Press the start button and select Run - enter REGEDT32 and press enter ..." Yet this is in essence what has become standard to ask of people who've purchased our web-based software.

As a matter of fact, as much as I dislike Eclipse, this is one place where the authors got it right. Eclipse is one of the few applications even for the desktop in which the installation is one step: unzip. Though even the average desktop application at a minimum provides an installation "wizard" that guides the user through the installation process, insulating them so that the software does all its own programming work.

But the situation of requiring programming work to install software actually becomes even worse when we're talking about web software than when we're talking about desktop software. And this is where it comes down to the bottom line for companies with regard to "total cost of ownership". It is significantly more expensive to own a typical web application that is advertised as "easy to modify". Why? Because when web applications are modified in the way that most of ours are modified (editing someone else's code) they become much harder to upgrade at a later date when the original authors release newer versions.

This is a part of the "heavy burden" on maintainers that Sommerlad is talking about. How many times have you seen a company that purchased a web application that was "almost" what they wanted, made "minor changes" and was then stuck with that system for the foreseeable future, unable to upgrade because they couldn't take the time to merge their modifications with the new version and test it?

To date, the onTap framework is the only framework I'm aware of that actually addresses that issue, and it does so specifically because I've taken a Tony Stark approach to development which I'll talk about in a minute.

Sommerlad: Not only the GoF book is a reason for this situation, but also its use in training and education by teachers inexperienced in OO programming. Often the drawbacks of a Design Pattern are not explained well enough by the GoF or are omitted by readers or teachers, since DP are perceived as the OO design panacea. One reason for the obscurity of some of the issues with DPs lie in the aged form of the GoF style. Most modern pattern books provide a style that more clearly shows the problem and forces resolved and the downsides of the solution. Another issue is the often exclusive focus on the original 23 Design Patterns without showing students the breadth of pattern literature where better solutions for their design problems might be presented.

Human beings are notoriously biased. I will admit that I haven't been studying it for very long, but I believe all software engineers really should study cognitive science because it can help us to understand when and how our thinking is consistently failing us.

For example, I know that my perception of a block of code is very different if it's my code than if it's someone else's code. In particular, I'm apt to perceive that poor conventions in my code address specific problems, while perceiving that poor conventions in someone else's code exist because the other person has bad habits. More specifically, see my recent comment on some code in Luis Majano's ColdBox framework. This is an example of the actor-observer bias, which put simply goes something like this: "if others do it, it's their fault - if I do it, it's not my fault, it's because of my circumstances". I suspect that the actor-observer bias is actually a product of the general availability heuristic - we perceive others as having "bad habits" and ourselves as having "bad circumstances" because, while our own circumstances (and solutions) are consciously available to us, it takes a little digging to uncover other people's reasons. Since we don't (or really can't) dig to get at the reasons for others, we generally perceive them as being less rational (among other things). I can think of several comedians off the top of my head who've been quite successful with routines involving pointing out the stupidity of others, such as George Carlin, Gallagher and Bill Engvall. Plus we've also got cognitive dissonance ensuring that we'll have a more positive view of ourselves and our own abilities most of the time (depression notwithstanding).

While I can recognize it in my writing, this doesn't mean that I'm immune to it. In fact I'm certain I do more of it than I recognize - we all do; that's why it's a "human" bias. These biases affect everyone, including the researchers who study them. Humans are just not very objective... but we aren't objective not because we're "faulty" in any real way -- quite the opposite. We're unobjective because these biases, while making our perception inaccurate, also help to ensure our survival. In an evolutionary sense, the traits that are most advantageous to survival in a population are the ones that generally become dominant over time. So we're unobjective precisely because these are rather literally "healthy illusions" to have. Both Richard Wiseman in his book The Luck Factor and Martin Seligman in his book Learned Optimism, two of the world's most eminent scientists, have shown, both statistically and through experimentation, how the majority of people are overconfident and that, in spite of the occasional pitfall, overconfidence is actually an advantage to our survival, health and success in life.

But we haven't actually evolved to write software. That is to say, we've not evolved such that the traits that are advantageous in software engineering have become dominant in the population. That's a good thing for us engineers! If we evolved to be better software engineers as a species, then you and I might be out of a job. :)

Indeed our biases negatively influence most of our software. For one thing, we suffer from a rather nasty endowment effect. What is the endowment effect? Simply put, once you own something, you overvalue it. Granted it's not always applicable, but much of the time (if not most of the time) it's pretty accurate. And it applies equally to ideas (or ideals) as it does to physical possessions like cars or homes, which is why software engineers sometimes sound a lot like religious zealots. Note the ardent fervor of the open-source Ubuntu zealot as he verbally vivisects the casual user of Microsoft Office for his ambivalence to the EVIL corporate empire and its Sith master Darth Gates. Of course, Microsoft flunkies are often equally insular. And as such the techno-cultural war in middle America resembles, at least in spirit, the neverending wars in the Middle East... or alternatively the trilogy war in Clerks 2.

But just as endowment influences our subscription to Windows or anti-Windows camps, it also influences everything else we do in software engineering. What this means is that once we've decided on "the solution", as I mentioned before, we significantly overvalue "the solution" because it's "the solution" and not "a solution". And this is precisely what Sommerlad was talking about with regard to design patterns -- the tendency they have to become golden hammers. When Sommerlad mentions the fact that people often omit any discussion of the drawbacks of a pattern, he's also describing, at least in part, the effect of confirmation bias.

A while back Matt Woodward also posted a good and reasonably thorough blog in which he talked about the same phenomenon. In short, there've been some general grumblings over the use of the "Gateway" design pattern in ColdFusion. Some ColdFusion programmers want a gateway to return a collection (an array) of fully instantiated objects, which is common practice in Java.

Herein lies a huge problem, either for people coming to ColdFusion from Java or for ColdFusion programmers who are interested in learning the way things are done in Java, because simply moving code from Java to ColdFusion line by line is a bad idea. Why? Because ColdFusion isn't Java - its strengths and weaknesses are very different from the strengths and weaknesses of Java. For starters, instantiating objects in Java is a pretty efficient prospect, but an object (CFC instance) in ColdFusion is a lot less efficient. So where Java programmers can get away with the inefficiency of instantiating several hundred objects returned from a search all at once (even when only a handful of them will be used), the same technique is dreadfully, painfully slow in ColdFusion.
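To make the contrast concrete, compare a Java-style gateway to the idiomatic CFML version (hypothetical methods -- the variables.dsn datasource, the users table and the User CFC are assumptions, not anyone's published code):

<!--- Java-style: instantiate a CFC per row -- painful in CFML when a search returns hundreds of rows --->
<cffunction name="getUsersAsObjects" access="public" returntype="array" output="false">
	<cfset var users = arrayNew(1) />
	<cfset var qUsers = "" />
	<cfquery name="qUsers" datasource="#variables.dsn#">
		SELECT userID, userName FROM users
	</cfquery>
	<cfloop query="qUsers">
		<cfset arrayAppend(users, createObject("component", "User").init(qUsers.userID, qUsers.userName)) />
	</cfloop>
	<cfreturn users />
</cffunction>

<!--- idiomatic CFML: return the query itself and instantiate an object only for the rows you actually touch --->
<cffunction name="getUsers" access="public" returntype="query" output="false">
	<cfset var qUsers = "" />
	<cfquery name="qUsers" datasource="#variables.dsn#">
		SELECT userID, userName FROM users
	</cfquery>
	<cfreturn qUsers />
</cffunction>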

On the other hand, where Java often forces you to use try-catch all over the place, causing Java developers to add try-catch statements out of habit instead of for a reason, ColdFusion is much more sane in that regard. Which is why it's telling that if you look at any of the code produced by Paul Hastings, you'll notice that he's littered nearly every method in every CFC he's published with a gratuitous try-catch, which serves no purpose other than to litter the code, slow him down and (albeit marginally) create overhead for the server. He's endowed with the arbitrary and silly notion that Java is a "better" language, and as a result his bias for Java informs every decision; he doesn't stop to consider the strengths and weaknesses of the current environment. (I honestly really wonder why he bothers with ColdFusion at all -- he seems to hate it so much, especially for having been a Team Macromedia member and now an Adobe Community Expert.)

So ColdFusion has its own set of strengths and weaknesses separate from Java and yet in part due to endowment, ColdFusion programmers often don't respect this fact. They use ColdFusion as though it's synonymous with Java, which makes for at best mediocre ColdFusion development.

Matt actually went further to illustrate the endowment effect by showing how ColdFusion developers have acquired a relatively unique interpretation of "DAO" and "Gateway", and how, when these patterns were described in the original GoF book, not only was there no mention of anything (anything at all) about databases, but the two were also virtually synonymous in their descriptions. Indeed, the original GoF book, which Sommerlad now says is "bad for software design", said very little about the actual code. We tend to think of a "DAO" as something very specific, but in the parlance of classical design patterns it was actually very unspecific. Not only was it very unspecific, there were strong reasons why it was unspecific. A design pattern isn't about practical application, it's about concept -- the art of the abstract, far removed from the very specific ways that we as programmers typically approach them.

It's a question of mindset. The GoF book was designed for engineers -- but the end result has been mechanics trying to use it. There's nothing wrong with mechanics, or even with being a mechanic; it's just a different job and it requires a different kind of thinking. A mechanic receives specific tools that perform specific tasks, and he performs those tasks. He has books with specific regulations for already designed and manufactured equipment. The engineer works on the car at the other end, before anything has even been discussed with the manufacturing division. When the engineer is doing his job it's not a nut or a bolt or a shock absorber -- to the engineer, it's a weight and a force and a desired outcome. Where the mechanic has to be concerned with fuel distribution, ignition and transmission, the engineer is free to swap out the internal combustion engine for batteries, allowing for the straight-line acceleration that let an electric car beat a gas-powered Formula 3 racecar in this MythBusters episode.

The mechanic deals with specifics of manufacture because the car is already built and rarely is the driver able and willing to pay the immense sums needed to make major modifications to an already assembled vehicle. The engineer deals with generalities of purpose. It's a totally different problem. The mechanic is asked to "fix the engine" - the engineer is asked to "produce a top speed of 200mph". Those two jobs require totally different ways of thinking.

The intent of software of course is to take the hassle out of our daily lives by automating tasks that we currently perform manually and by allowing us to perform tasks that we previously couldn't, through the application of similar automation. As such, the objective of software can't really be met via the mechanic mindset. The mechanic would be asked to "fix the shopping cart". The engineer would be asked to "reduce the workload of our order fulfillment department by automating their paperwork". The mechanic task doesn't achieve anything new, it only perpetuates the system that exists. The engineering task saves the company money and creates opportunities for the order fulfillment department to be more productive and do other things to help the company's success.

The best software always has been and always will be created by engineers (and especially those engineers who take the time to understand cognitive science and human factors issues like the magical number 7 plus or minus 2). The best software will be developed by engineers who think like the archetypal engineer Tony Stark.

We should all strive to think like Iron Man, but what exactly does that mean? (spoilers follow)

  • Think Options: In the beginning of the film, Stark is captured by a group of terrorists who demand that he build a missile system for them. His initial reaction is one of despair - utter depression. He's convinced that he's going to die and refuses to do any work. Then a fellow prisoner says "this is a very important week for you". At that point he begins planning his escape, and out of a box of scraps he builds the arc reactor that he wedges into his chest, both to keep the shrapnel out of his heart and to power his first prototype of the armor. The terrorists did bring him equipment, but for the most part he used scraps. He didn't sit around lamenting the equipment he didn't have. Like Tom Hanks's character in Castaway, he looked at the objects he had available and he considered their individual properties - what were they good for? If you broke them or melted them down, or strapped them to something else, what else could they be good for? In the early days of software development, everything was new, and everyone had to think this way, because there weren't already handily predesigned tools to solve every problem. These days the problem is reversed - the "solutions" are too readily available and we fall into the trap of applying them without really thinking through the solution to determine if it's the best we can do (and it often isn't). Like Stark at the beginning of his captivity, we become stuck in the thinking that our hands are tied by these tools we have, forgetting that we made them in the first place.
  • Don't Take No For An Answer: This is actually rather closely related to the admonition to think of options. Tony says "we should look into Arc Reactor technology again," and is met with the response, "we knew when we built it that it wasn't practical... we only built it to shut up the hippies!" He doesn't let that distract him from his purpose. When someone says something can't be done, it means they don't know how to do it, nothing more.

    Several years ago, when ColdFusion MX was first released, I ended up going through a hair-pulling session involving the cfstoredproc tags. I mentioned it on the CF-Talk list and Sean Corfield replied that "it can't be done because JDBC doesn't support named parameters". He was right about JDBC support for named parameters -- he was wrong about it not being doable, and in fact you can use JDBC (2.0) to do it. Over the past few years there's been apparently on-again, off-again support for dbvarname, which boggles my mind, because I had actually fixed the problem myself not long after I complained about it. While I didn't launch a major campaign over it (because most of us aren't using a lot of stored procedures), I did let people know. Anyway, it's still baked into the onTap framework's SQL abstraction tools.

    The SQL abstraction tools in and of themselves represent a large body of work which Ben Forta claimed to be impossible when he said "true DBMS portability is unattainable". While it's true that not all databases support triggers, for example, that's outside the realm of the CF application, and you can certainly write the CF application to account for the same sorts of things the triggers might otherwise handle. Ultimately, most of what Forta describes as the challenges of database portability are covered in the onTap framework's SQL abstraction tools with (now that they've been optimized) minimal overhead. I also take issue with Forta's comment that switching platforms isn't common -- it is, as Hermes Conrad might say, "technically correct... the best kind of correct". Meaning that it ignores the fact that many of us (myself in particular) are designing software to be used by others, and are unwilling to sacrifice customers simply because they chose a different database platform.

    Far from being a burden, the SQL abstraction tools in the onTap framework actually allow me much greater flexibility with regard to database interaction, letting me quickly and easily build very complex queries that would otherwise be very challenging to read and understand, and to do it in a way that is so straightforward it's virtually self-documenting. The use of not merely and/or keywords in searches, but internationalized and/or keywords, is a prime example. I'm able to do this with one line of code, and afterward not only is the code both plenty efficient and eminently legible, it's also SQL-injection proofed, because unlike ObjectBreeze, the onTap SQL tools use cfqueryparam. It also means I never have to type a cfqueryparam tag myself, which means both that the sqltype is automated (which reduces coupling in the application) and that it reduces the amount of code I have to write in general. I was doing ORM before Reactor or Transfer were a twinkle in anyone's eye -- how's that for something that's (according to Ben) "not doable and not worth doing"?

    Stark Ent. Employee: the technology doesn't exist... 
    Obadiah Stane: Tony Stark built one IN A CAVE! With a BOX OF SCRAPS!
    Stark Ent. Employee: well... I'm not Tony Stark 

    Okay, I've beat this one to death, moving on...

  • Think Small: There are actually two parts to this proposition. Thinking small is not about the size of the application or its features. Thinking small is ultimately about agile software - the ability to switch gears quickly. The application itself may be very large, but it's built in very small, encapsulated pieces. The Arc Reactor in Tony's chest is no more than four inches across. Each individual piece of the suit is tiny, performing only one very specific function - it's only when the individual pieces are put together that the larger suit functions as a whole. The SQL abstraction tools in the onTap framework that allow me to create internationalized and/or keyword search support (something everyone should do, but nobody does) are only possible because each part of the system deals with a very, very small aspect of the SQL being generated: a single comparison in a where clause, a single join to another table, and so on.
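
    As a self-contained illustration of what one of those small pieces might look like (my own simplified sketch, not onTap's actual internals), here's a function representing a single where-clause comparison and nothing else -- a larger query is just a collection of pieces like this:

        <cfscript>
            // one tiny piece: a single comparison in a where clause
            function comparisonPiece(column, operator, value, sqltype) {
                var piece = structNew();
                // the sql fragment holds a placeholder rather than the raw value...
                piece.sql = arguments.column & " " & arguments.operator & " ?";
                // ...and the value travels with the fragment, so the assembled
                // query can emit a cfqueryparam for it later
                piece.param = structNew();
                piece.param.value = arguments.value;
                piece.param.cfsqltype = arguments.sqltype;
                return piece;
            }

            // e.g. comparisonPiece("product.price", "<", 20, "cf_sql_numeric")
        </cfscript>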

    Though I also apply this to my work in a more literal respect. If I'm working on a template that's approaching 1,000 lines of code, I generally feel something's gone wrong and immediately start looking for ways to make it smaller. If I'm generating code with a bean generator like Illudium PU-36, or using any kind of file-merging tool as standard operating procedure during my work (something I've heard described as SOP for applications created with bean generators), then something is definitely horribly wrong. All of these things - code generation, file merging, humongous templates - create headaches and are a major hassle to maintain, and all of them are totally avoidable. If you learn the techniques that let you avoid them, you'll discover they're the same techniques that let you do some amazing things, like the SQL abstraction system I described before.

    The smaller the code you write (physically), the easier it will be to modify and maintain. Try to think of the code you write like a stirrup. The stirrup is considered one of the most important inventions in human history -- and it takes a very minimal amount of material to make one. In fact, if the stirrup were much larger or more complex, it would likely have been either cumbersome or fragile, and either of those would have made it much less useful and prevented its relatively rapid and widespread adoption.

  • Turn It Around: Don't think of "failures". Think instead of learning experiences. If a particular project "fails", it's still an opportunity to put the strengths and weaknesses of the experiment to use elsewhere. In the film, Tony is working on a "simple flight stabilizer, it's perfectly harmless" -- at which point he tests the stabilizer and it throws him bodily across the room. That stabilizer later becomes the repulsor he uses to incapacitate terrorists and destroy his company's stolen weapons in the east. Never assume that because something isn't working as expected it doesn't have value - and never assume that because something is working as expected it shouldn't be improved. Has everything I've done worked the way I hoped? No. At one point during the early development of the onTap framework, I built a fairly rudimentary blog for myself using text files instead of a database to store both the entries and the comments. That project might actually have worked if I'd had a better understanding at the time of Verity and/or XML (both of which I understand better now). Although that blog was taken down, I learned some valuable things about file access and developing facades from the experiment.

    Actually, more specifically, one of the things I did with the file management was create a single "file" component that can read and write multiple types of files (WDDX, text, zip, etc.) through one interface. And if I had anything to impart to ColdBox at the moment, from what little I know of it at present, it would be that they should have used the same approach for the IOC plugin. Right now the latest ColdBox distribution says there are two options for the IOC setting in the XML config file - "ColdSpring" and "LightWire". That's great! Assuming that's all there ever will be... The problem is that the IOC plugin CFC has those strings hard-coded into its methods, instead of using a facade that selects the appropriate IOC adapter from a directory based on an interface (or an abstract parent class), the same way the framework automatically selects the appropriate event-handler CFC. There would then be an IOC/ColdSpring component and an IOC/LightWire component. This would provide the flexibility the GoF intended in their OO design patterns, where it currently doesn't exist in ColdBox's IOC selection.
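
    The sort of facade I have in mind might look something like this (a hypothetical sketch, not actual ColdBox code): the framework instantiates an adapter CFC by name from a directory, so supporting a new IOC container means dropping in a new component rather than editing hard-coded strings:

        <cfscript>
            // pick the adapter by convention: IOC.ColdSpring, IOC.LightWire,
            // or any other component dropped into the IOC directory
            adapter = createObject("component", "IOC." & config.iocFramework).init(config);
            // every adapter extends the same abstract parent, so the
            // framework only ever talks to the shared interface
            userService = adapter.getBean("userService");
        </cfscript>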

Some of you may be thinking that I'm horribly conceited because I'm citing examples from my own work and comparing myself to Iron Man. I'm not saying these things to be conceited; my own achievements are simply the ones I'm most familiar with, the ones that spring to mind first when I make these sorts of comparisons.

Am I conceited? Generally speaking, I do find myself five minutes into the future with regard to the ColdFusion community - I had polymorphic OO code in ColdFusion 4, I was doing ORM before there were "objects", I was doing much of what's in Spry before there was Spry, and so on. I've always disliked XML configuration files and preferred convention over configuration (CoC), which has become more popular lately with ColdBox and now Fusebox adopting convention-based architecture, and with everyone doing auto-wiring in ColdSpring and LightWire. It's typically been only after I've been doing something for a while that someone else in the ColdFusion community popularizes it - Reactor, Transfer, Spry, etc. Both the Fusebox and Mach-II frameworks have recently changed or added features that closely resemble suggestions I made several years ago, suggestions that were shot down at the time. Am I conceited for noticing that my ideas were "ahead of their time"?

Although I may sometimes feel a bit bitter about the way things have turned out, I don't post these things to inflate my ego or to say "I told you so". I don't consider myself an absolute authority, or even an expert on most subjects. While I do consider myself a very good software engineer, and while there's always some hope when I post these blogs that they'll encourage more people to consider the framework, ultimately I post these things because I love what I do, and I want to make people think and possibly spur debate, in the hope that everyone might benefit from it (including myself).

So long story short, think like Iron Man! :)

Thoughts?

Indecisive?

I just read an old blog post from Brian Rinaldi. In it, he talked about something he sees a lot among programmers who are comfortable with a procedural approach, are just starting to learn OO, and are being told that they need to choose a framework. Brian sees a lot of people recommend that these "old school" guys write a small application and then port it to each framework they're considering, to compare their strengths and weaknesses. Brian disagrees, and he has a good point: many people have difficulty choosing a framework they like and end up stalling on the decision.

Brian Says:

If you are sitting back studying frameworks, researching options, reading blogs and looking for the answer about what framework you need to just admit to yourself you are just stalling (go ahead, flame on)!

...

None of this is intended as criticism. I know where you are because I have been there. Nonetheless, this stressing over which framework to use is a stalling technique plain and simple. It is brought about by the fact that moving from procedural to OO is painful and difficult. Once you accept that, just bite the bullet, pick a framework (any framework) and get coding!

It is stalling, that's true... but the reason for it runs deeper than the pain of learning OO when you're comfortable with procedural programming. There's a really good book titled Predictably Irrational by Dan Ariely, a professor of behavioral economics at MIT. One of the subjects he covers is how difficult it is for people to choose between options and effectively "close doors" (though not necessarily "burn bridges"). He gives a bunch of examples in the book, from choosing a home or a car to choosing a spouse.

He describes an experiment he conducted at MIT in which the researchers created a video game that allowed the player to win real money (not huge sums, but an incentive anyway). The game presented players with three doors they could click on, and each click earned them a certain semi-random amount of money. So the players would click on each door a few times, quickly determine which one earned them the most money on average, and then stick with that door through most of the rest of the game.

It was obvious that the players could pretty easily see which door gave them the best results. At that point the researchers wanted to find out whether players would behave irrationally to keep their options (doors) open. So they changed the game: any time you clicked on a door, it would grow and the other two doors would shrink. If you let them shrink enough, eventually they would disappear and you couldn't get them back. The best strategy for earning the most money didn't change: figure out which door pays the most and click it till the game ends. However, NOBODY did that.

This time there was no learning curve or associated psychological pain (as Brian described with regard to learning OO frameworks). The only rational incentive was the money, so a person behaving rationally should easily have been able to choose the best strategy and earn the maximum cash reward. Yet not a single player was willing to let any of the doors disappear, no matter how little they paid out. Conclusion: not a single player was able to act rationally. The players so feared losing options that they allowed themselves to lose money.

That being said, I've met a bunch of folks who, entirely aside from the framework question, do the same kind of back-and-forth over the core languages. Should I stick with CF? Maybe I need to learn more Java, or .NET, or PHP, or all of the above. Maybe I'm not cut out for programming to begin with and should just go be a line cook at Chili's. I'm sure I've had that same problem at some point, although I can't recall ever having it over the question of languages or frameworks. I don't sit around and wait for other people to decide what's a good way to program; I figure it out and I go do it.
