Why doesn’t “crappy” code perform as badly as it looks?

I like good code as much as the next person, but I wonder why people spend so much time attacking “crappy” code patterns that don’t actually perform as badly as expected.

Here’s a thought: if you want to complain about “crappy” code, then why not fix all the crappy code you become aware of, including all that stuff you cannot seem to understand?

Or maybe just realize that your idea of “crappy” code may be another man’s treasure. Oh wait, someone already thought of this when “one man’s trash is another man’s treasure” was coined, circa the 1570s.

Or to put this another way: those who write off certain coding patterns as “crappy” may be limiting their own ability to perceive really useful code they would otherwise have overlooked.

I am NOT in favor of truly crappy code!

But I am also NOT in favor of being so closed-minded that I cannot see how useful alternative coding styles may be.

A single instance of “crappy” code cannot really be crappy!

If you have to run a chunk of code thousands or millions of times before a performance problem shows up, there is very little reason to worry about that problem in a chunk of code that is used once.

I might be impressed by those who rail against “crappy” code if they also made darned sure that none of the code in their own field of vision was “crappy” too.

Scenario #1

For instance, consider the following pattern, recently classified as “crappy” by an intrepid band of hearty Python coders at a company we dare not name here…

This is “crappy”!

toks = 'one two three four five six seven eight nine ten'.split()

This is not!

toks = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten']

The first version was called “crappy” because the split() was seen as unnecessary: the second version already produces the desired list directly.

Those who called the split() version crappy have probably never had to load a list from a comma-delimited jungle of data they spent several days entering by hand. If they had, they might have gained some useful experience as to why that version is somewhat less crappy after all.

Scenario #1 Benchmarks

I like benchmarks.  I like to know when code may perform badly at run-time at scale.

The crappy version for Scenario #1 runs 5x slower than the non-crappy version for 1,000 iterations.

The crappy version for Scenario #1 runs 7.6x slower than the non-crappy version for 10,000 iterations.

The crappy version for Scenario #1 runs 7.65x slower than the non-crappy version for 100,000 iterations.

The crappy version for Scenario #1 runs 7.66x slower than the non-crappy version for 1,000,000 iterations.

Um, if you turn off the Python GC, the performance issues seem to disappear for a while! Just a thought…
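For anyone who wants to reproduce numbers like these, here is a minimal sketch using timeit; the exact ratios will vary by machine and Python version, and the commented-out gc line lets you see how garbage collection changes the picture.

```python
# Rough sketch for reproducing the Scenario #1 numbers with timeit.
# Exact ratios vary by machine and Python version.
import timeit
# import gc; gc.disable()  # uncomment to see the GC's effect on the numbers

SPLIT_STMT = "toks = 'one two three four five six seven eight nine ten'.split()"
LITERAL_STMT = ("toks = ['one', 'two', 'three', 'four', 'five', "
                "'six', 'seven', 'eight', 'nine', 'ten']")

t_split = timeit.timeit(SPLIT_STMT, number=100000)
t_literal = timeit.timeit(LITERAL_STMT, number=100000)

print("split: %.4fs  literal: %.4fs  ratio: %.1fx"
      % (t_split, t_literal, t_split / t_literal))

# Both spellings build the identical list; only the construction cost differs.
assert 'one two three four five six seven eight nine ten'.split() == \
       ['one', 'two', 'three', 'four', 'five',
        'six', 'seven', 'eight', 'nine', 'ten']
```

Note that timeit measures the statement in isolation; in a program that runs the line once, either spelling costs microseconds.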

Scenario #1 Analysis

The real question is this: “Was the crappy code being used often enough to justify the comments this code pattern elicited?” Probably not!

The justification for calling the crappy version truly crappy was performance, and there are some rather glaring performance numbers to be sure, but ONLY when the crappy version is used 1,000 times more often than it actually was.

Those who claimed the crappy version was “crappy” had to magnify the usage pattern by at least 1,000 times before its performance cost was even measurable.

I agree the crappy version would be truly crappy if it were the actual source of some measurable problem: lost revenue, or some other demonstrable effect that was actually causing harm.

The problem, as I saw it, had nothing to do with how crappy the code pattern may have been, because let’s face it, this would be crappy code if it were used often enough for a problem to exist.

The problem, as I saw it, was a group of people all agreeing there was a problem where none really existed, simply because in their minds they magnified the usage by 1,000 times just to catch a glimpse of one.

This crappy-looking code may have a real-world, non-crappy use case that could save someone a lot of time: maintaining a huge hand-entered data set that has to be loaded into a list at run-time. The desire to make this code look non-crappy, in a use case whose cost could NEVER actually be measured, is the real problem. Far more time could be spent typing all those commas and quote marks than the cleanup would ever be worth.

Why any person or group of supposedly intelligent, talented software engineers would claim a harmless chunk of code was harmful, given the actual use case in the original context (a single use of the crappy version), is well beyond my ability to comprehend.

The person who raised the issue supposedly has more than 20 years of programming experience! He found a single reference to the crappy version in a source file, probably executed exactly once per run of some larger program. WOW! Talk about yelling “FIRE” in a crowded room!

The people who agreed with him were even more of a mystery, because these people are supposed to be among the best and brightest at this particular unnamed company, and they went along with the one guy who should have known better than to yell “FIRE” in a crowded room.

It is interesting to note that the same people who inflated a potentially crappy use case beyond its original scope seem perfectly okay with the following truly crappy coding patterns, which they apparently wish to do nothing about:

  • An eager-loading ORM that maintains and uses exactly one database cursor per session!
    • Why was this NEVER changed?

I will stop the list here, because I think this one point bears further analysis.

How in the world do these crappy-code-detecting software engineers allow an eager-loading ORM to exist in the first place? And the company wants this sort of thing corrected!

I have to wonder about the skills these engineers actually possess when they cannot find a way to remove the eager-loading ORM.

Removing the eager-loading ORM would be easy enough for me to accomplish, but then I can tell the difference between code that is really crappy and code that only seems to be crappy.

Well You See Timmy…

Now for the moral of this tale…

People who live in glass houses always seem overly eager to throw stones, even when the cracks in their own walls are so wide that everyone knows there are problems.

I have no issue with people who see imagined problems that don’t exist, so long as they have their own houses in order, but that was not the case in this instance.

The very same people who were so willing to detect crappy code patterns where no crappy use case existed seem unwilling or unable to resolve glaring performance issues in a huge pile of code.

The rest of the issues these people could be focusing on are as follows:

  • mod_python rather than WSGI
  • Django not being used; in fact, no discernible web framework is being used at all.
  • An eager-loading ORM – easy to resolve with Django.
  • A non-scalable web app – because mod_python is being used rather than WSGI, for instance.
  • Development environment issues – all developers share a single run-time instance; each developer gets a different virtual host, but all development happens in one Linux instance.
    • Okay, this one truly baffles me!
    • How difficult can it be to give each developer their own Linux instance at a moment in time when everyone has a cloud-based solution for doing exactly that?

Look, all of these issues could be handled easily enough, yet none of them are being handled at all. Why?

The reason all these glaring issues go unhandled is simple: lack of experience and lack of skill in the developer community.

Nobody wants to make any real changes, or nobody is able to make any real changes.

Dragging ancient code along from the deep past, and then being either afraid or unwilling to update it, is more than ridiculous!

Solutions!

The solution to all this is easy to state but very difficult to implement!

Rewrite your code every 18 months!

Better tools are being churned-out all the time.

Django is a proven web framework!

WSGI is a proven interface for Python web apps!
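For the curious, WSGI is nothing more than a calling convention: the server hands your app an environ dict and a start_response callable, and the app returns the response body. A minimal sketch using only the standard library follows; the helper call_wsgi is my own, purely for demonstration.

```python
# A minimal WSGI application -- the same interface Tornado's WSGI container
# and any other WSGI-capable server speaks.
def simple_app(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app without a real server by faking the WSGI handshake.
def call_wsgi(app, path="/"):
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    environ = {"REQUEST_METHOD": "GET", "PATH_INFO": path}
    body = b"".join(app(environ, start_response))
    return captured["status"], body

status, body = call_wsgi(simple_app)
print(status, body)
```

Because every layer in the stack speaks this same convention, you can swap servers in front of the app without touching the app itself.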

Python+Django+Tornado+WSGI+Nginx adds up to a scalable web app, one that scales as easily as you can automate spinning up one more Linux virtual machine in the cloud!

Or let’s put this another way…

Python+Django+Tornado+WSGI+Nginx was easy enough for me to handle all by myself. Not that I wouldn’t have wanted to do it with a team of others; there just weren’t that many others I might have done it with.

The moment I achieved my first stable installation of Python+Django+Tornado+WSGI+Nginx, I knew it was the way to go!

Python+Django runs as a separate WSGI web server with performance comparable to what you get from the Google App Engine, oddly enough; and yes, I have run benchmarks that tell me this.

Tornado is a stand-alone Python-based Web Server with very good performance characteristics.

Tornado talks to an instance of a Python+Django web app via WSGI.

Nginx talks to an instance of Tornado, which in turn talks to a Python+Django web app via WSGI.

Why so many web servers in this stack?

Why use Tornado at all?

I happen to know a little something I call Latency Decoupling, which tends to make web pages serve faster the more layers of web servers you use.

Nginx connected to many Tornado servers, each connected to one or more WSGI web apps, serves web content far more efficiently than Nginx connected directly to the same WSGI app.

Latency Decoupling kicks in, and your end-users wear happy faces.

The ability to scale the web app also increases!

Many instances of the web app within each Tornado instance!

Many Tornado instances behind each Nginx instance!
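The Nginx side of that fan-out can be sketched in a few lines of configuration. The ports and upstream name below are hypothetical, assuming three Tornado instances on the same host:

```nginx
# Hypothetical Nginx front end fanning out to several Tornado instances.
upstream tornadoes {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    location / {
        # Nginx round-robins requests across the Tornado pool by default.
        proxy_pass http://tornadoes;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Adding capacity is then just a matter of starting another Tornado process and adding one more `server` line to the upstream block.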

Deployment gets easier!

Now, with a single Python or Ant script, you can spin up yet another Amazon EC2 instance, connect the Nginx instances using a load balancer of some kind (Nginx also does load balancing), and before you know it you have architected a really cool Django cloud solution that nobody else seems to have just yet.

Build a slick control panel for your Django cloud users and, bingo, you have the ability to grow a web app from a single instance to any number just by clicking a button on a web page!

The only other detail would be how you monetize all this into something that generates revenue.

All of this is easy enough to build when you have all the parts.

All of this should be easy enough for any company to use, if only they had I.T. staffers who had played around with these kinds of solutions; alas, that seems to be lacking at most companies, except for a few.

Too bad most would-be skilled programmers will scoff at most of what’s written in this article as some form of “crazy”… but then, once upon a time, the notion of generating electricity was also seen as “crazy,” along with the notions of gravity and quantum physics. I can live with whatever others wish to say, so long as I get to build something really cool along the way.


Secure Anonymous P2P is coming…

100% Secure … Blowfish encryption with 1024–2048-bit public keys!

100% Anonymous … nobody knows who you are, and you never meet anyone you get files from or share files with.

100% P2P … true peer-to-peer, with no middle-man other than Twitter.

This is everything BitTorrent cannot be!

BitTorrent is NOT secure! Law enforcement can eavesdrop and discover the files you are sharing or downloading.

BitTorrent is NOT anonymous! You can be tracked when using BitTorrent.

Secure Anonymous P2P allows you to search for the files you want to download from the peers who have them.

Secure Anonymous P2P uses a secure VPN that connects both peers to each other.

Secure Anonymous P2P provides a built-in Twitter blaster that tells the world about your files.

Secure Anonymous P2P will have both a desktop client and an Android client; both are in the works.

Google App Engine + LAMP (Python/Django/FastCGI) = Rock Solid Always On Service Platform

Front your business with the Google App Engine

What could your business do with an extra 1 million hits per day and 1 GB of transfer per day?

This takes the load off your own back-end servers while providing your business with a rock solid always on front door, so to speak.

Your valuable customer data is not even stored in the cloud in this model; it is stored on your own LAMP database server running MySQL 5.1.

Your customers see your content through the lens of the Google App Engine.

Memcache handles the transfer of content from your own LAMP server so that your content does not get pulled from your server more than once per version, for instance. Your Google App Engine app can either pull your content across the wire on demand, as needed, or optimistically via a background cron job.
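The pull-once-per-version idea is a cache-aside pattern, which can be sketched in a few lines. A plain dict stands in for memcache here, and fetch_from_origin is a hypothetical stand-in for pulling content across the wire from the back-end server.

```python
# Cache-aside sketch of the "pull content at most once per version" idea.
# A plain dict stands in for memcache; fetch_from_origin is a hypothetical
# stand-in for fetching content from the back-end LAMP server.
cache = {}
fetch_count = 0

def fetch_from_origin(key):
    global fetch_count
    fetch_count += 1              # count trips to the back-end
    return "content-for-" + key

def get_content(key, version):
    cache_key = (key, version)    # version in the key: a new version
    if cache_key not in cache:    # forces exactly one fresh fetch
        cache[cache_key] = fetch_from_origin(key)
    return cache[cache_key]

first = get_content("home-page", "v1")
second = get_content("home-page", "v1")  # served from cache
print(fetch_count)  # the origin was hit only once
```

Bumping the version string is what invalidates the cached copy: the old entry is simply never asked for again.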

Running Django 1.1.1 in both your Google App Engine app and your back-end LAMP server means your GAE (Google App Engine) app can cache real data objects from the back-end, using JSON as the transfer medium. When a GAE object does not have the data, the back-end server is polled through a REST web service and the data is pulled across the wire as compressed JSON using ZIP. (GAE knows how to handle ZIP files, though LZMA or some other pure-Python compression technique could also be used to further reduce the data transferred from your back-end LAMP server.)
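The serialize-compress-decompress round trip looks roughly like this; zlib stands in for “ZIP” here, since the two sides only need to agree on a compression codec, and the record shown is an invented example.

```python
# Sketch of shipping data objects as compressed JSON, as in the REST
# transfer described above. zlib stands in for "ZIP"; any codec both
# sides agree on would do. The record is an invented example.
import json
import zlib

record = {"id": 42, "name": "widget", "price": "19.95"}

# Back-end LAMP side: serialize and compress before sending.
payload = zlib.compress(json.dumps(record).encode("utf-8"))

# GAE side: decompress and deserialize on arrival.
restored = json.loads(zlib.decompress(payload).decode("utf-8"))

assert restored == record
print(len(payload), "bytes on the wire")
```

For small records the compression overhead may outweigh the savings; it pays off on larger lists of objects.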

A Real World Scenario

Let’s say you are paying something like $150 for a LAMP server with 10 Mbps of unmetered transfer per month. At 10 Mbps, that is roughly 1.25 MB per second, or about 108,000 MB (roughly 105 GB) of transfer per day.

GAE gives you 1 GB of transfer per day FREE per app, and you get 10 apps FREE, for an aggregate total of 10 GB of transfer per day at no cost.

GAE therefore extends your own LAMP server with roughly 10% more data transfer per day at no cost.

GAE also gives your single point of failure, your one LAMP server, an always-on front door (maintenance periods excepted). If you cache your content intelligently, you can perform maintenance on your own LAMP server without interrupting your customers, other than disallowing logins, for instance, assuming your LAMP server does nothing more than serve content and handle paying customers after they log in.

Leverage Flex 4 (aka Flash Builder 4)

Front your SaaS (Software as a Service) offering with Flex 4, using some Flash tricks that make it very hard for people to get at your valuable SWFs: a SWF loader container keeps your real SWFs out of the browser’s cache, which makes them much harder to reverse-engineer.

Now you have an RIA (Rich Internet App) using some Flash tricks.

Your RIA off-loads processing to the client to the extent possible.

Your RIA App uses JSON and REST Web Services for simplicity and performance.

Your RIA App funnels all requests through the Google App Engine to the extent possible.

Your RIA app maintains the user’s session locally rather than on the server. This means your paying customers can log in through any server or cloud, with the actual authentication handled by your back-end LAMP server using SSL certs you did not even have to pay for, because you created them yourself. GAE apps have been able to use HTTPS since October 2008, and SWF-based apps tend to use little bandwidth (assuming Flash Builder 4 is used along with RSLs for the Flex framework). Your customers can see that your site uses SSL via HTTPS, while your Flex 4 app hides the fact that your back-end LAMP server is being contacted over HTTPS, either directly from the Flex 4 app or indirectly via your GAE app. You could also hash sensitive values on the client, for example with some form of SHA computed in Flex 4; passwords are typically hashed this way, keeping the password stored in your database reasonably secure.
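The salted-hash idea looks roughly like this on the server side (the function names are my own, and in practice a deliberately slow KDF such as hashlib.pbkdf2_hmac is a better choice than plain SHA-256, which is shown only to match the text):

```python
# Minimal salted-SHA sketch of the password handling described above.
# Real deployments should prefer a slow KDF such as hashlib.pbkdf2_hmac;
# plain SHA-256 is shown only to mirror the text. Function names are
# hypothetical.
import hashlib
import os

def hash_password(password, salt=None):
    if salt is None:
        salt = os.urandom(16)     # fresh random salt per password
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest           # store both; the salt is not secret

def verify_password(password, salt, digest):
    # Recompute with the stored salt and compare digests.
    return hashlib.sha256(salt + password.encode("utf-8")).hexdigest() == digest

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))
```

The per-password salt is what defeats precomputed lookup tables: two users with the same password still get different digests.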

Make it Secure

As an option, your Flex 4 RIA can pass encrypted data between your customers and your back-end server via your Google App Engine app, and vice versa. When data changes, the GAE cache of that data is invalidated on a per-customer basis via memcache.

Make it Intelligent

Leverage the heck out of Google App Engine to the extent possible.

Let your Flex 4 RIA intelligently begin bypassing the Google App Engine whenever it knows your back-end LAMP server is online and you are about to exceed your GAE limits for the day; your back-end LAMP server probably provides more data transfer per day than your ten GAE apps can.

Your Flex 4 RIA can intelligently know when your back-end LAMP server is down for maintenance and when GAE is down for maintenance, and bypass whichever one is down.

Your Flex 4 RIA can also intelligently know which servers to hit and when, which lets you expand your back-end LAMP server farm; your GAE apps can handle this task as well. This allows you to grow the system as your business grows: make more money, add another LAMP server in the form of a cheap VPS or a dedicated box (it makes no difference which), and your front door keeps humming along as though it were one big, expandable, elastic cloud, because that is what it becomes, at very little cost to your business’s bottom line.

Make it Elastic

Go ahead and leverage the Amazon cloud or the Salesforce cloud, using the Google App Engine as the front door. Off-load processing as needed to lower the cost of your own data processing without having to buy your own data center.

Make it use Lua

Yes, you can use Lua with the Google App Engine: as long as Lua is running on your own back-end server, you can use Lua all you want alongside Python running in the Google cloud.

Make it use PHP

Yes, you can use PHP with the Google App Engine in the same way: as long as PHP is running on your own back-end server, you can use PHP all you want alongside Python running in the Google cloud. You can also run PHP via Java in the Google cloud, but that seems like a long way around to make PHP more scalable. Let’s face it, PHP is far less scalable than Python, or Lua for that matter; Python uses FastCGI, and Python’s threading model just works better than PHP’s or Java’s.

Grow your Business on Pennies

Let Google pay for all those servers you don’t need or want to buy. Google most likely has more money than you do, so why not let Google pay what would otherwise be your electric bill? A single server can easily cost up to $100/month just for electricity, depending on its configuration, where it lives, and how it is cooled, among other factors. Google tends to use low-cost, low-power servers, but let’s face it, Google has the money to pay for such things, so let them.

VPS are pretty cheap.

Free hosts are also, well, FREE.

Dedicated servers tend to be a bit pricey depending on the data transfer you want to buy.

By leveraging Google to the extent possible, you could either use the Google App Engine entirely for FREE, assuming you are creative enough, or let Google front your business for cheap.

You might not need to pay anything more than, say, $150 to $200 per month for a LAMP server with 6 GB of RAM and something like 150 GB of disk space. Make efficient use of how you store and transfer your content, and you need never worry about the cost of your own high-powered “cloud.”

Vyper-CMS™ Static Content Optimizations Yield 200% Performance Boost

Vyper-CMS™ is a Prime Investment Opportunity

Vyper-CMS™ 1.0 is Online and Open for Public Use

Vyper-CMS™ gets Static Content Optimizations