Why doesn’t “crappy” code perform as badly as people seem to think ?

I like good code as much as the next person, but I wonder why people spend so much time focusing on “crappy” code patterns that don’t actually perform any worse in practice ?!?

Here’s a thought:  If you want to complain about “crappy” code, then why not fix all the crappy code you become aware of, including all that stuff you can’t seem to understand !!!

Or maybe just accept that your idea of “crappy” code may be another man’s treasure !!!   Oh wait, someone already thought of this when the phrase “one man’s trash is another man’s treasure” was coined, circa the 1570s !!!

Or to put this another way…   those who dismiss certain coding patterns as “crappy” may be limiting their own ability to perceive genuinely useful code they would otherwise have overlooked !!!

I am NOT in favor of truly crappy code !!!

But, I am also NOT in favor of being so close-minded that I cannot see how useful alternative coding styles may be.

A single instance of “crappy” code cannot really be crappy !!!

If you have to run thousands or millions of iterations to see a performance problem in a chunk of code that executes once, there is very little reason to worry about that performance problem !!!

I might be more impressed with those who rail against “crappy” code if they also made darned sure that none of the code in their own field of vision was “crappy” !!!

Scenario #1

For instance, consider the following pattern, recently classified as “crappy” by an intrepid band of hearty Python coders at a company we dare not name here…

This is “crappy” !

toks = 'one two three four five six seven eight nine ten'.split()

This is not !

toks = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten']

The first version was called “crappy” because the split was seen as unnecessary: the second version already expresses the desired result directly.

Those who called the split version crappy have probably never had to load a list from a comma-delimited jungle of data entered manually over several days; otherwise they might have gained some useful experience in why the “crappy” version can be rather less crappy after all.

Scenario #1 Benchmarks

I like benchmarks.  I like to know when code may perform badly at run-time at scale.

The crappy version for Scenario #1 runs 5x slower than the non-crappy version for 1,000 iterations.

The crappy version for Scenario #1 runs 7.6x slower than the non-crappy version for 10,000 iterations.

The crappy version for Scenario #1 runs 7.65x slower than the non-crappy version for 100,000 iterations.

The crappy version for Scenario #1 runs 7.66x slower than the non-crappy version for 1,000,000 iterations.
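For anyone who wants to reproduce these numbers, here is a quick sketch using the standard library’s timeit module. The exact ratios will vary by machine and interpreter, so treat the output as illustrative only.

```python
import timeit

# Rough reproduction of the Scenario #1 benchmark; the statements below
# are the "crappy" split version and the plain-list version from above.
split_version = "toks = 'one two three four five six seven eight nine ten'.split()"
list_version = ("toks = ['one', 'two', 'three', 'four', 'five', "
                "'six', 'seven', 'eight', 'nine', 'ten']")

for n in (1000, 100000):
    t_split = timeit.timeit(split_version, number=n)
    t_list = timeit.timeit(list_version, number=n)
    print("%9d iterations: split is %.1fx slower" % (n, t_split / t_list))
```

Note that timeit deliberately repeats the statement many times; the absolute cost of a single split is far below anything a human, or a profiler, would notice.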

Um, if you turn off the Python GC, the performance gap seems to disappear for a while !!!  Just a thought…

Scenario #1 Analysis

The real question is this:  “Was the crappy code being executed often enough to justify the comments it elicited ?”  Probably not !!!

The justification for calling the split version truly crappy was performance, and there are some rather glaring performance differences to be sure, but ONLY when the code is executed 1,000 times more often than it actually was.

Those who claimed the split version was “crappy” had to magnify the usage pattern by at least 1,000 times before its performance cost became measurable.

I agree the split version would be truly crappy if it were the actual source of some measurable performance issue tied to lost revenue or some other demonstrable problem.

The problem, as I saw it, had nothing to do with how crappy the pattern may have been; let’s face it, this would be crappy code if it were used often enough for a problem to exist.

The problem, as I saw it, was a group of people all agreeing there was a problem where none really existed, simply because in their minds they had magnified the usage by 1,000 times just to catch a glimpse of one.

This piece of crappy-looking code may have a real-world use case that could save someone a lot of time, given the right circumstances: a huge, hand-maintained data set that must be loaded into a list at run-time.  The desire to make this crappy-looking code non-crappy, in a use case whose crappiness could NEVER actually be measured, is the real problem !!!  Far more time could have been spent typing all those commas and quote marks, just to make the code look less crappy, than the change would ever have been worth.

Why any person, or group of supposedly intelligent, talented software engineers, would claim a harmless chunk of code was harmful, given that the actual context of the original question involved a single use of the pattern, is well beyond my ability to comprehend.

The person who raised the issue was supposed to have more than 20 years of programming experience !!!  He found a single reference to the pattern in a source file, probably executed exactly once per run of some larger program.  WOW !!!  Talk about yelling “FIRE” in a crowded room !!!

The people who agreed with him were even more of a mystery, because they are supposed to be among the best and brightest at this particular unnamed company, yet they went along with the one guy who should have known better than to yell “FIRE” in a crowded room.

It is interesting to note that these same people, who were able to inflate a potentially crappy use-case beyond its original scope, seem perfectly okay with the following truly crappy coding patterns they wish to do nothing about:

  • An Eager-loading ORM that maintains and uses exactly 1 Database Cursor per Session !!!
    • Why was this NEVER changed ???

I will stop here with the analysis because I think this one point bears further analysis.

How in the world do these crappy-code detectors allow an Eager-loading ORM to exist in the first place ???   And the company supposedly wants this sort of thing corrected !!!

I have to wonder about the skills these crappy-code detectors actually possess when they cannot find a way to remove the Eager-loading ORM in the first place !!!!

Removing the Eager-loading ORM would be easy enough for me to accomplish, but then I can tell the difference between code that is really crappy and code that only seems to be.

Well You See Timmy…

Now for the moral of this tale…

People who live in glass houses always seem overly eager to throw stones, even when their own houses have cracks so wide everyone can see them.

I have no issue with people who see imagined problems that don’t exist, so long as their own houses are in order, but that was not the case in this instance.

These very same people, who seemed more than willing to detect crappy code patterns where no crappy use case existed, are the ones who seem unwilling or unable to resolve glaring performance issues in a huge pile of code.

The rest of the issues these people could focus on are as follows:

  • mod_python rather than WSGI
  • Django not being used; indeed, no discernible web framework is being used at all.
  • Eager-loading ORM – easy to resolve with Django.
  • Non-scalable Web App – because mod_python is being used rather than WSGI, for instance.
  • Development environment issues – all developers share a single instance of the run-time; each developer gets a different virtual host, but all development is done in a single Linux instance.
    • Okay, this one truly baffles me !!!
    • How difficult can it be to give each developer their own Linux instance, at a moment when everyone has a Cloud-based solution for doing exactly this ?!?

Look, all these issues could easily be handled, but none of them are being handled at all.  Why ???

The reason all these glaring issues go unhandled is simple… lack of experience and lack of skill in the developer community.

Either nobody wants to make any real changes, or nobody is able to.

Dragging along ancient code from the deep past, and then being afraid or unwilling to update it, is beyond ridiculous !!!

Solutions !!!

The solution for all this is simple, though very difficult to implement !!!

Rewrite your code every 18 months !!!

Better tools are being churned-out all the time.

Django is a proven Web Framework !!!

WSGI is a proven technology !!!

Python+Django+Tornado+WSGI+Nginx equals a scalable Web App, one that scales as easily as you can build an automated process for spinning up one more Linux Virtual Machine in the Cloud !!!

Or let’s put this another way…

Python+Django+Tornado+WSGI+Nginx was easy enough for me to handle all by myself; not that I wouldn’t have wanted to do this with a team of others, there just weren’t that many others I might have done this with.

The moment I achieved my first stable installation of Python+Django+Tornado+WSGI+Nginx, I knew it was the way to go !!!

Python+Django runs as a separate WSGI web server with performance comparable, oddly enough, to what you get from the Google App Engine; and “yes”, I have run benchmarks that tell me this, based on the data.

Tornado is a stand-alone Python-based Web Server with very good performance characteristics.

Tornado talks to an instance of a Python+Django web app via WSGI.

Nginx talks to an instance of Tornado, which talks to an instance of a Python+Django web app via WSGI.
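The glue holding the Tornado-to-Django hop together is the WSGI calling convention. Here is a minimal, stdlib-only sketch of that contract (the app and the helper names are my own invention, standing in for a Django project); a WSGI container such as Tornado’s ultimately drives the app in exactly this way:

```python
def simple_app(environ, start_response):
    """A tiny WSGI application, standing in for a Django project."""
    body = b"Hello from the WSGI layer\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

def call_wsgi_app(app, path="/"):
    """Drive a WSGI app directly, the way a front-end container would."""
    captured = {}
    def start_response(status, headers):
        # The app reports its status line and headers through this callback.
        captured["status"] = status
        captured["headers"] = headers
    environ = {"REQUEST_METHOD": "GET", "PATH_INFO": path}
    body = b"".join(app(environ, start_response))  # apps yield byte chunks
    return captured["status"], body

status, body = call_wsgi_app(simple_app)
print(status)
print(body.decode())
```

Because every layer speaks this same interface, you can swap the front end (Nginx, Tornado, a plain wsgiref server) without touching the app.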

Why so many web servers in this stack ???

Why use Tornado at all ???

I happen to know a little something I call Latency Decoupling, which tends to make web pages serve faster the more layers of web servers you use.

Nginx connected to many Tornado servers, each connected to one or more WSGI Web Apps, is far more efficient at serving web content than Nginx connected directly to that very same WSGI web app.

Latency Decoupling kicks-in and your end-users have happy faces.

Ability to Scale the Web App also increases !!!

Many instances of the Web App within each Tornado instance !!!

Many Tornado instances within each Nginx instance !!!

Deployment gets easier !!!

Now, with a single Python or Ant script, you can spin up yet another Amazon EC2 instance, connect the Nginx instances using a Load Balancer of some kind (Nginx also does load balancing), and before you know it you have architected a really cool Django Cloud Solution that nobody else seems to have just yet.

Build a slick Control Panel for your Django Cloud Users and bingo you have the ability to grow a Web App from a single instance to any number just by clicking a button on a web page !!!

The only other detail would be how you monetize all this into something you can use to generate revenue.

All of this was easy enough to build once I had all the parts.

All of this should be easy enough for any company to use, if only they had I.T. staffers who had played around with these kinds of solutions; alas, that seems to be lacking at most companies, except for a few.

Too bad most would-be skilled programmers will tend to scoff at most of what’s written in this article as some form of “crazy”… but then, once upon a time, the notion of generating electricity was also seen as “crazy”, along with the notions of gravity and quantum physics.  I can live with whatever others wish to say… so long as I get to build something really cool along the way.

Vyper Logix Corp Makes ITC (Inter-Thread Communications) Easy as 1,2,3 !!!

Take a look at the code sample found in this article !!!

Very Lean !!!

Very Cool !!!

Threaded !!!

Best of all the main process terminates itself once all the work has been done !!!
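The original code sample only survives as a screenshot, so here is a hedged sketch in the same spirit (all names are my own invention, written for modern Python 3): worker threads communicate through a Queue, and the main thread simply falls off the end, terminating the process, once join() confirms all the work is done.

```python
import threading
import queue

# Queue-based inter-thread communication (ITC): workers pull tasks from
# one queue and push answers onto another.
tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        item = tasks.get()
        results.put(item * item)   # the "work": square each number
        tasks.task_done()

# Daemon threads die automatically when the main thread finishes,
# so the process terminates itself once the work has been done.
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

for n in range(10):
    tasks.put(n)

tasks.join()   # block until every task has been marked done
print(sorted(results.queue))   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```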



Code sample was printed using Wing IDE and Snag-It 10.


Management 101 – Make Dashboards

Yes, I am a Manager now…  not saying where because I value my job but this is my innovation for Managers everywhere.

Make Dashboards – Upper Managers seem to love Dashboards and this is all I am gonna say about this.

And who ever said I’d have to have an MBA to be successful in Management, anyway ?!?


Simple improvement – 3D Pie Charts !!!  Gotta love Excel !!!


Animal Kingdom Interview Sample

This comes right from an actual Interview Question with a major Fortune 100 Company and since I passed the interview process I shall not mention any names here…

Let’s just say for grins and giggles, I was more than able to whip-up the high-level view for this code on the whiteboard but then this is such a simple problem from an OOP perspective.

See the code here.

I was told many others did not respond correctly to this same question…

The code presented here works in Python 2.5 or Python 2.7, and probably in Python 2.4, since these versions are of the same general family.
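The linked code is not reproduced here, so this is only my guess at the shape of a typical whiteboard answer (the class and method names are invented): a base class holding common state, subclasses overriding one behavior, and polymorphic dispatch doing the rest.

```python
class Animal(object):
    """Base class: common state plus an abstract sound."""
    def __init__(self, name):
        self.name = name

    def speak(self):
        raise NotImplementedError("subclasses provide their own sound")

    def describe(self):
        # Polymorphism: the right speak() is chosen by the subclass.
        return "%s says %s" % (self.name, self.speak())

class Dog(Animal):
    def speak(self):
        return "Woof"

class Cat(Animal):
    def speak(self):
        return "Meow"

for animal in [Dog("Rex"), Cat("Felix")]:
    print(animal.describe())   # Rex says Woof / Felix says Meow
```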


String splits versus lists for static data ?!?

Every so often someone will ask a really interesting Python question.  This is not one of those; however, the data seems to disagree with the opinions.

In some python code, I see this:

switch_fields = "id switch_model ip".split()

which is surely the same as this:

switch_fields = ['id', 'switch_model', 'ip']

Since the intent is clearly to create a list of three items, why would someone create a string instead and invoke a string method to create a list?

My preference is for the “say what you want” side; I don’t consider myself to be a “Python Programmer”, though.

Some responses from various Python programmers…

I agree with saying what you want. I’d probably ask the author if they would ever be inclined to do the reverse, i.e. switch_field_str = ' '.join(['id', 'switch_model', 'ip']), to prove a point about how confusing/silly that can be.


     I agree, there is no logical or practical reason to have a static string that has a split performed on it every time the code is run. It is a waste of computational time.

Now for my response…

I was curious…

I had to run over 100,000 iterations to see a measurable difference…

Apart from the run-time differences, one might find it easier to maintain code with static lists loaded from string splits, especially after typing a lot of quote marks and commas… or maybe I have had far too many late-night coding sessions in my past.

Thought I would share the attachment.

Seems the data does not really support the opinions, if the concern is computational time and performance… unless you were doing the split-to-list thing something close to 1,000,000 times; even at 100,000 iterations the difference is barely measurable, let alone noticeable.
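And if the split cost ever did matter, the cheap fix keeps the readable form: pay the split exactly once, at module load, so every later use is a plain name lookup (the helper below is hypothetical, just to show the constant in use).

```python
# Pay the split cost once at import time; the per-call difference between
# the split form and a hand-typed list literal then vanishes entirely.
SWITCH_FIELDS = 'id switch_model ip'.split()

def row_to_dict(row):
    """Hypothetical helper pairing the field names with a row of values."""
    return dict(zip(SWITCH_FIELDS, row))

print(row_to_dict([7, 'WS-C2960', '10.0.0.2']))
```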

Eh, such is life.

Node.js Achilles Heel

The good thing about Node.js is all that JavaScript running on my server !!!

The bad thing about Node.js is all that JavaScript running on my server !!!

Hey, if you love JavaScript, then by all means use the heck out of Node.js; you may eventually realize why Node.js is so weak as a server-side technology.  Heck, almost nobody even cares why Ruby on Rails is so weak, and it’s got millions of followers.

Node.js lacks a threading model !!!

So who cares ?  Ruby lacks a useful threading model too, and nobody cares about that at all.

Ok, to be fair, Ruby 1.9.x does support native threads, but Ruby 1.8.x does not. (See also: this)

Node.js has no threading support at all, because JavaScript has no threading support.

As a casual user, or typical Manager, you will never even know or care about the lack of a threading model… none of your developers will either, for that matter.

On the other hand, if you ever try to develop some swift Node.js gizmo that could benefit from a threading model, well let’s just say you will be stuck with a slow service.

The good news is that even without a threading model, Node.js can be used effectively; sadly, few of your under-30 coders will know how to engineer around the lack of one.

Enjoy the lack of threads… Ruby lovers have for a number of years, and you don’t hear anything from them about it either way.

Addendum – threading model

So there is, kind of, a threading model for Node.js, but only in the grossest manner: you can fork or spawn a process.  Big deal !!!   That is not the same thing as spinning up a thread by any stretch of the imagination.
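For contrast, here is a small illustrative sketch of what real shared-memory threading buys you in Python: threads update a common structure directly, with no serialization or inter-process plumbing, which forked or spawned processes cannot do without extra machinery.

```python
import threading

# Threads share the parent's memory, so they can update a common dict
# directly; a lock guards it against concurrent updates.
counts = {}
lock = threading.Lock()

def tally(word):
    with lock:
        counts[word] = counts.get(word, 0) + 1

threads = [threading.Thread(target=tally, args=(w,))
           for w in ["spam", "eggs", "spam"]]
for t in threads:
    t.start()
for t in threads:
    t.join()

# counts now holds {'spam': 2, 'eggs': 1} (key order may vary)
print(counts)
```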

Node.js is too immature for prime time

You would expect to see a break-out Node.js Web Framework on par with Django for Python, but there only seems to be a ton of choices with no clear winner.

The bottom line is that Node.js is all the rage this year.  But will Node.js still be there next year, and the year after, with the same strong following, once people figure out just how Node.js works ?!?

Here’s what you can look forward to with Node.js

You will look at Node.js and fall madly in love, no doubt.

Node.js can be very fast but only when you use it sparingly and then only for lightweight processes.

Node.js cannot do any heavy lifting because it lacks a threading model, which means you will be spinning up Processes, not Threads; and as we all should know, a Process is much heavier than a Thread.

Node.js will be a bit more of a pain to scale unless you want to buy, rent, or lease a single server for each Node.js process; in that case, go ahead and throw all your money into Node.js servers with my blessing.  I will be spending much less on my Python servers, and not only because I can pile more services on each one…

Node.js is this year’s Ruby on Rails.  *yawn*  So what else is new ?!?

Node.js appeals to non-techies and so does Ruby.

Node.js can be useful but only for those who know how to leverage it properly.

Use Node.js for simple one-off web services and keep Python around for the heavy lifting.

The bottom line

JavaScript is JavaScript, no matter how much lipstick you pile on.  JavaScript was designed for the browser.  Get over it already !!!  Yes, you can run JavaScript on your servers.  Big deal !!!  I can run the Chrome browser on my servers too; does that automatically mean I should be using Google Chrome for my web services ?  I mean really !!!

What’s next ?  Let’s run Adobe AIR servers ?!?   Oh, no, I forgot everybody is supposed to hate Adobe, right ?!?

Let’s keep JavaScript running in our browsers…  Servers are for serious system-level work and this is why god made Python and Stackless Python anyway.

TCP/IP Latency Decoupler

The ability to decouple TCP/IP latency, separating the party that makes a request from the party that serves it, results in a huge (2x) improvement in performance.

The first trick…

First you have to be aware of the effect the requester’s latency has on the overall TCP/IP conversation, and how that latency tends to be acquired by the response.

Consider the following…

Let’s say I have a bag full of quarters and you have a handful of pennies; I will give you a quarter for every penny you give to me.  This is kind of like what the Web is like, you make a request for some data using a smaller amount of data and the server gives you a larger amount of data than you sent to the server.

Let’s further say you must first pull a penny from your hand before you can give it to me, and I cannot give you a quarter until I get a penny from you.  My ability to give you a quarter is coupled to your ability to give me a penny; however long it takes you to hand over a penny, that delay, or latency, is acquired by me, and I cannot respond with a quarter until it has been.

Traditional Wisdom…

The Traditional Approach, or one of them, is to reduce the time the server takes to respond, and this is part of the solution; the other part involves something far stranger than you might wish to believe: the requester’s latency can also be decoupled from the server’s response.

Your goal…

You can either continue to use traditional wisdom and reduce the time required for your server to respond, or wait for Cisco Systems or Google to discover how to decouple latency, and then pay them to do it for you.

My goal…

My goal is to build an online service you can use to decouple latency, and then hope you figure out that doing so is useful, so I can offer the service at a lower cost than either Cisco or Google would.

Don’t worry…

There is no need for you, the casual reader, to become concerned with how to decouple latency; the problem has already been solved, rather nicely, using a very simple Ancient Egyptian solution to a related problem we moderns consider too antiquated to fool around with.  The trouble with using ancient solutions for modern problems is that most people think it stupid to do so, assuming those who lived in ancient times were backward by our modern-day standards.  The fact is, however, that ancient peoples were far smarter in some respects than we are today, and the Egyptians were second to none in numerous respects we cannot begin to understand even in the 21st Century.

The fact that latency can be decoupled successfully is more than enough for those of us with a deep, intimate understanding of the inner workings of TCP/IP.  I will say I have talked about this with many would-be networking experts who should have known what I was talking about, but did not… and that’s okay, because most humans alive today are completely oblivious to Quantum Mechanics, and still their lives are largely governed by that not-so-well-understood, mysterious thing.

I don’t have any problem with those who choose to disbelieve what I talk about… this is what makes life interesting to me: being able to discover solutions others cannot seem to understand, and then watching how events unfold as I wonder just how long the rest of humanity might take to discover them too.

Cloud computing is one of those solutions I discovered, but the rest of humanity seems perfectly content using the smallest aspect of what cloud computing could be.  Google seems to understand some of it, but even Google has not revealed a very deep understanding of what cloud computing could do to reduce the operating cost at Google.

Agile Development is another of those solutions; nobody I know of talks about how the Agile Method might be used to greatly reduce the cost of software development.  I get to be entertained by the fact that I have been using the Agile Method since I began writing software in 1974, back when I had no idea I was doing anything other than what was perfectly obvious to me, yet not at all obvious to everyone else.  Ah, such is life… some of us have to create the Agile Method while others of us simply use it because it is perfectly obvious to us.

You can put me in a room with a pile of computer parts and watch me build the computer and then write the code for it, or you can watch almost anyone else walk out of that room without even making the attempt to produce software.  It is for that latter group the Agile Method was crafted, because those people, and you know who you are, lack the motivation to produce software unless they are told how to produce it.

Now get out there and figure out how to decouple latency…  I did, and my method works… what about the rest of you… ?!?
