Wednesday, December 29, 2010

What is .NET ? - The Wrath of COM

“What is .NET programming?”: Object Trek: The Wrath of COM

A Brief History of .NET Programming  

The Adventure Continues

To best use .NET, one needs to understand how it came to be, the niche that it filled and replaced, and the purpose it serves today.  Computer coding styles and languages are constantly evolving.  Most of the evolution is driven by user demand and one-upmanship among software vendors, but a significant portion of it is driven by necessity.

"Object Trek:  The Wrath of COM"

During the height of the PC Clone Wars in the late 1980s, there were manufacturers who attempted to clone one or the other of the industry leaders, the IBM PC and the Apple Mac.  Each system offered something the other did not, but Apple initially offered something that it seemed the PC could not have at all.  Apple had introduced the GUI.  As discussed in the “History” blogs, a GUI required a new software architecture, one that was able to perform true multi-tasking.  This was implemented in the OS through the use of internal messaging, which in turn invoked interrupt handlers.  PC-DOS was not a multi-tasking OS.

When Windows 1.0 was introduced in 1985, Microsoft added a Messaging Layer on top of PC-DOS to implement the required vectored interrupts.  The first version of Windows was not as slick and smooth as the Mac.  It didn’t look as good, but you could run more than one program at a time.  Integrating a true Messaging Layer into MS Windows did not come until much later, when the move away from DOS towards a true GUI began in the mid-90s.

One thing the Mac could do that MS Windows could not was integrate one application with another.  This meant importing a spreadsheet into a word processor document only once, so that if the spreadsheet updated then so would the word processor document.  We take this luxury for granted today, but it was not always the case.  The Mac had the edge of using software written by Apple specifically to do so.  Lotus 1-2-3 and WordPerfect running on the PC could not do it quite as well, as neither was written by Microsoft.

1987’s Windows 2.0 saw the introduction of DDE, Dynamic Data Exchange.  This breakthrough allowed Windows applications to exchange data dynamically.  The introduction of interprocess communication coincided with new releases of Word and Excel that allowed for the dynamic exchange of data just like a Mac.  Almost.  Windows 2.0 also let windows overlap instead of being tiled, and legally speaking, that feature was supposed to be unique to a Mac.

DDE in turn served as the foundation for something known as OLE, Object Linking and Embedding.  DDE technology can still be found in use in recent releases of Windows.  Windows 2.0 still ran under DOS, and most applications of the time ran under DOS.  Most software written for Windows came from Microsoft.  It seemed that most other folks didn’t know how, nor wished to bother with re-writing major applications.  Especially when it looked like Apple would win its copyright infringement suit against Microsoft.  “Bye, bye, MS Windows.  A Mac can overlap windows and Apple copyrights prevent you from doing so.”  Wait, not so fast.  Microsoft eventually won.  Though they lost four years, Marketing would make up for it.

The year 1991 was a landmark year in computing.  A relative newcomer to Washington named Sen. Al Gore, Jr. set the legal stage for an Information Super Highway built on technology that was to be “in the public domain forever”, as the legislation read.  Microsoft’s Windows 3.0, released the previous year, was a dramatic improvement over any predecessor.  Microsoft introduced Visual Basic 1.0, which was a dramatic change of direction from BASIC.  Both of these products relied heavily on the recently introduced OLE technology.  But, Windows 3 was not exactly perfect.

Visual Basic 1.0 provided a development environment for true Windows-based programs so that a transition away from DOS could begin.  Most existing applications ran under DOS.  Many simulated a GUI with “windows” created from common text characters.  The applications were developed during the heat of the legal battle with Apple, so folks did their own thing.  No one wanted to write against code that had a real possibility of being yanked from hard drives and sent to the old obsolete code home.  Many of those applications looked rather crude and pitiful compared to a Mac, which made many of them seem grossly overpriced.  Visual Basic also brought the DLL, the Dynamic Link Library, to the attention of many developers for the first time.

Around the same time, Microsoft released an Application Framework Extension, nicknamed “afx” but better known as MFC, the Microsoft Foundation Classes, for C programmers.  MFC was an object-oriented wrapper for the Windows API, and required C developers to use the extended version of the C language known as C++.

Just when C developers were getting used to the extensions for running under MS-DOS, Microsoft added more to their plate with an entire host of new extensions for use with MS Windows, all built on the extended language known as C++.  Good name, that “++” part.  It was quite literal.  It was huge.  Some folks didn’t have enough space on their already crowded hard drives for it.  Marketing.

The most annoying change was the header shuffle: MFC projects expected the precompiled header “stdafx.h” in place of the familiar standard C headers.  They broke your code base!  Now you had to go back in and fix everything just to use C++ and MFC with your existing code base.  Some folks wanted to drop writing for the PC.  Problem was they couldn’t, because there were too many potential customers to sell to.

1992 saw the introduction of Windows 3.1 and Windows for Workgroups, which reduced the number of crashes because of improvements upon OLE.  OLE 2 incorporated a new abstraction in the form of a new concept, an object model known as COM, the Component Object Model.  1993 saw the introduction of Visual Basic 3, VB3, a big improvement over the VB2 introduced a year earlier because it included the Microsoft Jet Database Engine, which could read Access database files.  When asked about the “2” designation, Microsoft dropped it, saying it was still the same technology.  Marketing.

The majority of PCs in use were beginning to shift from 16-bit designs to the newer 32-bit designs by the mid-1990s.  VB4 was released in 16-bit and 32-bit versions, and neither version was exactly 100% compatible with the other.  Really.  VB4 marked a significant change in the controls used on windows forms, moving from the DOS-rooted VBX, Visual Basic Extensions, to OLE-enabled controls that used the “ocx” file extension.  The Windows Operating System was evolving, as was the development software.  Windows applications were coming out that actually had a UI driven by hi-res graphics instead of a cheesy-clever arrangement of text characters running under DOS.

Microsoft made up for the VB4 debacle by introducing VB5, which produced 32-bit code exclusively.  VB5 could import code produced by either version of VB4, 16-bit or 32-bit.  VB5 let users create custom controls based upon OCX, though through a separate application.  Even though custom controls retained the same technology and file extension, you now had to call them ActiveX Controls.

I think they introduced something called Active Desktop around this time, too.  Names must have been a marketing thing, not my cup of tea, and thus not for me to understand or question.  Windows 95 was original, so I thought.  That was better than the 1.0, 2.0, 3.0, 3.1 stuff, which they had been doing.  Windows 7, anyone? 

Active Desktop added dynamic HTML content from the Web to your desktop.  It also consumed too many resources and provided an excellent pipeline for viruses.  It was a failure.  Turn it off.  But, it also made a good argument in defense of Internet Explorer being an integral part of Windows when the U.S. government had sued Microsoft for monopoly violations concerning bundling IE with the OS.  See?  It was a marketing thing, just like I said.

Microsoft’s response for developers to the explosive growth of the World Wide Web during the 1990s was the introduction of VB6, which added the ability to create web-based applications.  VB6 was considered by some to be the pinnacle of VB programming.  It could do it all.  You could do anything that a C programmer could do.  There was nothing to hold you back except a nightmare something-or-other known as DLL Hell.

The introduction of dynamic link libraries had proven to be a rather mixed blessing.  Those wishing to write applications that specifically ran under Windows were able to access some of the same binary assemblies that the OS used.  Windows 1 and 2 saw a lot of “windows-based” applications that were not true GUI applications.  The applications did not use graphics and ran under DOS, just like MS Windows.  As a result, there was an across-the-board lack of consistency in the appearance of a “window” from app to app to OS.

Granting access to the same GUI software to generate true graphics instead of text gave commercial developers a means to create applications with a consistent look and feel.  The Mac had been doing this properly for quite some time.  But, this access and privilege also created a new problem that had numerous causes.  Applications could no longer find the correct DLL from time to time.  PCs began crashing to the floor like drunken sailors.  The cynically accurate expression “the blue screen of death” came into being.

Commercial applications were written against a specific version of a DLL, and would install a copy of it when the application was installed.  If the OS had been updated, the newly installed DLL would “roll back” the OS and cause it to crash.  The problem also worked the other way.  A DLL could be installed that was newer than what your installed software used.  The new DLL might not even contain the same functions anymore!  The OS was not able to allow for multiple copies of the same DLL, either.  This was a problem in more ways than one.  It was a grave security risk that hackers took advantage of all too easily.

There were numerous hacks and work-arounds, some sanctioned by Microsoft.  Others were not, but were inspired by a near desperate sense of self-preservation.  One semi-permanent solution that was immediately apparent could be seen in the success of the JAVA “virtual machine” model.  The JAVA VM provided a much-needed layer of abstraction and management between the application software and the actual hardware, much as a kernel does within an OS.  Microsoft implemented this solution in the form of .NET.

Providing a layer of abstraction that was managed by the “OS” was what Microsoft had decided to go for.  This would require that developers write code against some form of a “virtual machine”, as JAVA did.  The end result was the .NET Framework Class Library, the FCL, and all of its associated components, the most important of which is the CLR, the Common Language Runtime, which ran the whole show when it came time to run an application.

.NET also introduced a new language that had a lot of the look and feel of C++.  The new language was C#, pronounced “C Sharp”.  C# bore an even stronger resemblance to JAVA, however.  C# was designed to force developers to be explicit in code.  One controversial aspect of VB is how user-friendly and helpful it can be.  The help is a mixed blessing.  Everything comes with a price.  That and other differences come at a price that some are avidly eager and willing to pay, and some are vehemently against.  It’s a touchy subject to say the least.  Who would have thought that ideological politics and religious fervor could creep into something as benign as programming?  I decided long ago to blame it on Marketing again and call it a day.  It works for me.  Blame the sales people because I already know that they are totally innocent.  Just ask one.

In the years since its first introduction, the FCL has become commonly referred to as the Base Class Library, or BCL.  Consider the two terms to be entirely interchangeable.  .NET and the FCL/BCL did not really change how Windows worked, but rather changed how programs worked with Windows.  The FCL is just a rather large collection of assemblies, which can be selectively referenced, and which is essentially a set of wrapper classes for the most-used content found in the older MFC and COM libraries of the past.

In fact, .NET code can bypass the FCL entirely, through a process known as Interop, and call the old unmanaged libraries and assemblies directly.  Most of the older libraries still ship with the latest versions of Windows.  But, that will not continue forever.  At some point in the future we will find those assemblies being dropped from Windows one by one until they are all gone, or at the very least no longer accessible for Interop in their present form.  At least, that’s the way I would do it moving forward.

Rudy  =$^(

I dedicate this blog entry in memory of Jeff Spacek, on this day, Aug. 31, 2010.  (edit: Dec. 29, 2010 at the new blog web site.)

I had originally intended to include this installment in my “Brief History of Code” series of code blogs as the final chapter, but quickly decided against it for a couple of reasons.  I thought this was just a bit too far off-topic from the more generalized commentary in that series of blogs.  This discussion is more platform, computer language, and OS specific.  It didn't seem to really fit the broader view.

The “History” blogs are intended to be an informative and entertaining commentary that masks their true purpose of being a well-disguised glossary.  I also realized that I was having so much fun writing it that it was quickly becoming quite long.  Many people have suggested to me that I should have been a novelist, and it was showing.  I really tried to be brief.  So on that note, I wrote a new "Part 12", which is itself quite brief, and which you can now see posted.

The final and main reason rings on a more serious note.  I was inspired to start a blog because of the meticulous effort put in by an individual in his own blog whom I had only known superficially through the MSDN forums.  I never met the individual in person, although we had traded a couple of emails.  His goal was similar to mine, which was to help absolute beginners in programming to get up and running.

Jeff made an impact on all who read his blog, no matter your level of programming expertise.  At least I know it made a significant difference in me.  It spurred my mind to imagine the possibilities.  Jeff made me realize that I had a venue to continue my love of teaching in a whole new way.  I was writing and editing this installment when I learned of his tragic passing.  Within a day or two I realized that it was time to move on from the History blogs and continue Jeff's goal.

Rudy  =$^(

A Brief History - 12 - The Undiscovered Code

PART 12: Object Trek: The Undiscovered Code - Into the West

"A Brief History of Code", by Rudedog Hawkins


Frodo Baggins said to Samwise, “The next chapters are for you, Sam,” as he boarded the ship to sail "Into the West".

Rudedog Hawkins   =8^D

A Brief History - 11 - The Search for OOP

PART 11: Object Trek: The Search for OOP

"A Brief History of Code", by Rudedog Hawkins


In the early 1990s, a group of four software developers got the bright idea that they alone could figure out a working theory about how to apply these object concepts in some sort of structured way.  Their efforts to develop a theory did not meet with immediate success, which was no big surprise.  They were a David trying to slay a Goliath they could not see.  Many experts had no doubt approached this problem over the previous 30+ years, including the very same people who had first conceptualized objects.

Their initial mistake and major stumbling block was looking for some sort of definable structure where in fact there was none to be found, at least not of the sort they had sought.  But early on, they did make one crucial realization as to how to fit it all together.  “Keep It Simple, Stupid.”  When they could not find any structure or methodology, they kept true to that mantra.  They decided to go back to the roots of how software objects began, which meant imitating life, imitating the human brain.  Once again, they began to look for patterns, trying to define and impose what they figured would evolve from an arbitrary structure into a coherent one.  It still didn’t quite fit.  It still looked like leftover hash, without any rhyme or reason.

And so, they began to look for ideas outside of the computer industry.  This was not an act of desperation, but rather what some would call almost divine inspiration.  There are many branches of arts and sciences that are seemingly unconnected with one another.  One thing that binds these disciplines together is the mathematics.  In many cases, the identical mathematics even crosses into other disciplines.  For example, the same formula that calculates interest on a bank deposit is also used to calculate population growth of bacteria in a Petri dish.  It didn’t take long before their search bore fruit.
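The shared mathematics alluded to here is the exponential growth formula, value = initial × (1 + rate)^periods.  A minimal sketch, with made-up deposit and colony figures purely for illustration:

```java
public class Growth {
    // One formula, two disciplines: value = initial * (1 + rate)^periods
    static double grow(double initial, double rate, int periods) {
        return initial * Math.pow(1.0 + rate, periods);
    }

    public static void main(String[] args) {
        // A $1,000 deposit at 5% annual interest, compounded for 10 years
        double balance = grow(1000.0, 0.05, 10);
        // 500 bacteria doubling (100% growth) every hour, for 10 hours
        double colony = grow(500.0, 1.00, 10);
        System.out.printf("Balance:  %.2f%n", balance);  // ~1628.89
        System.out.printf("Bacteria: %.0f%n", colony);   // 512000
    }
}
```

Only the names of the variables change between the bank and the Petri dish; the formula is identical.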

Legend has it they were sitting in a bar looking over a book about Modular Architecture Design and Construction, and became fascinated with the concepts.  They initially loved the abstraction of it all.  The proverbial light bulb was lit.  Imagine the toy blocks called Legos.  Most any toy block can fit and work with most any other piece.  Why?  It works because each Lego piece has a standardized interface for interlocking with other Legos.

Of course the book was far more sophisticated than Legos, but the ideas for their legendary book that came a short time later were born.  Building modules could be made so that they contained sub-modules, sub-modules that served one or more purposes.  They realized that what an object was composed of could be far more useful than what it was derived from.  They discovered that how an object was designed could hold greater significance than how it was made.  In OOP terminology, this mindset is phrased like the following.

“Favor composition over inheritance.”

In Lego terms you could look at it this way.  You could snap any two pieces together to form a greater whole that still conformed to the original design description.  That sounds like reusable code.  There was no requirement that you had to connect a blue piece to only another blue piece.  That sounds like polymorphism.  You really were not even required to use two pieces of the same material.  As long as the two pieces met a certain shape and size specification, they could work together.  That sounds like abstract type descriptions.
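In code, "favor composition over inheritance" means building an object out of interchangeable parts rather than deriving it from a fixed parent.  A minimal sketch of the Lego analogy (the Connector, PlasticStud, and Assembly names are invented here for illustration):

```java
// The standardized "stud": any part exposing this interface can interlock,
// regardless of color or material (polymorphism over an abstract type).
interface Connector {
    String connect();
}

class PlasticStud implements Connector {
    public String connect() { return "plastic stud"; }
}

class WoodenPeg implements Connector {
    public String connect() { return "wooden peg"; }
}

// The assembly is COMPOSED of connectors; it does not inherit from them.
// Any mix of parts that meets the Connector specification will work.
class Assembly {
    private final Connector top;
    private final Connector bottom;

    Assembly(Connector top, Connector bottom) {
        this.top = top;
        this.bottom = bottom;
    }

    String describe() {
        return "joined by " + top.connect() + " and " + bottom.connect();
    }
}
```

No blue-piece-to-blue-piece rule: the Assembly never asks what a part is made from, only whether it meets the interlocking specification.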

The more complex architectural elements had additional components that defined the way that elements interacted.  They drew parallels to most of the major Object Oriented concepts known at that time.  They saw Polymorphism, Inheritance, Classes and Typing, and even methods and message passing. 

The modular architectural concepts could be sorted into categories representing Structural, Creational, and Behavioral traits.  There were elements that formed the support and structure of the final building.  Factories existed to create the custom modular elements based upon a set of specifications.  Once elements left the factory, they could have their uses significantly modified before or after installation, thereby entirely altering the behavior of an element.

The parallels to Object Programming were undeniable.  Their final book also laid out the groundwork for some rules of use of the modular elements based upon certain design objectives.  The brainstorm was set in motion, and a short time later their landmark book was released, “Design Patterns: Elements of Reusable Object-Oriented Software”.

The book's authors were Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides with a foreword by Grady Booch. The authors are often referred to as the Gang of Four, or GoF, for short.  The book provides code examples of 23 basic Design Patterns, and lays out some ground rules and goals on how Object-Oriented software should be written.  They defined the OOP mindset.
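As a taste of what the catalog contains, here is a minimal sketch of one of the 23, the behavioral Strategy pattern.  The pricing names below are invented for illustration; they are not the book's own examples:

```java
import java.util.List;

// Strategy: the algorithm itself is an object that can be swapped at runtime.
interface PricingStrategy {
    double price(double base);
}

class RegularPricing implements PricingStrategy {
    public double price(double base) { return base; }
}

class SalePricing implements PricingStrategy {
    public double price(double base) { return base * 0.80; } // 20% off
}

// The checkout's behavior changes without the checkout itself changing,
// much like a factory-made element modified after installation.
class Checkout {
    private PricingStrategy strategy;

    Checkout(PricingStrategy strategy) { this.strategy = strategy; }

    void setStrategy(PricingStrategy s) { this.strategy = s; }

    double total(List<Double> items) {
        return items.stream().mapToDouble(strategy::price).sum();
    }
}
```

Swapping in SalePricing alters every total the Checkout computes, yet no Checkout code was touched; that is the "altering the behavior of an element" idea in miniature.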

Unless you are a collector, a purist, a C programmer, or someone who simply likes torture, I would not recommend the book to any new developer.  It was originally targeted at the highly experienced C programmer, and remains so to this day.  As it was written nearly 20 years prior to this writing, many will find the examples to be quite outdated.  There are many contemporary texts that use code examples in the .NET languages in common use today, particularly C#.

Now that a set of ground rules and design philosophy were laid out, converts from Procedure-Driven code were jumping onto the bandwagon like it was a new religion.  Object-Oriented Programming is widely used today, but still misunderstood by many.  It has created a new rivalry in some circles.

Procedure-Driven code has not faded in the least.  Many databases in wide use today are relational databases, which were designed during the golden era of computer growth, long before OOP became anything more than a curious piece of software abstraction.  In fact, procedure-driven code is making a comeback under the guise of structural programming.  Some say that the future of programming just might be that type of structure.

Rudy  =8^D

A Brief History - 10 - The Revenge of the Hacker

PART 10: Silicon Wars: The Revenge of the Hacker

"A Brief History of Code", by Rudedog Hawkins


By the early 1990s, the term hacker had evolved in meaning.  It had turned from someone who was viewed almost as a folk hero of legend, to someone who was out to cause as much terror and havoc as they could.  Some hackers began to take actions that were undoubtedly immoral, but the industry was so new that their actions could not be deemed illegal.  Yes, I am talking about computer bugs, which led to computer viruses.  Not much software is perfect, if any.  The more complex it is, the more likely it is to have an error within the source code.

Internal software errors are known as bugs.  Bugs are supposed to be accidental.  Some bugs were not.  Some hackers began to sabotage their employer’s software because they felt underpaid.  Other misguided individuals wanted to look like a hero to those running their companies by solving a major crisis.  I have seen this first hand.  Bitter workers, who saw friends lose jobs in the name of greater company profit, intentionally created bugs to harm the reputation of the software vendor.  It was rumored that one vendor, wanting to sabotage the product of a competitor, created some bugs in it.  They all had a reason that felt valid enough for them.

Once caught, the bitter worker could feign ignorance and claim an innocent mistake.  Since there was no way to prove malicious intent, many of those people soon found themselves out of a job, which made them more bitter than ever.  Then bugs took on a new form.  Instead of hackers creating bugs to attack their employer’s product, bugs were attacking other vendors’ products.  The bad software was behaving just like a biological virus spreading to other software packages, and so viruses were aptly named.

Bugs are due to code found within the application code itself.  Viruses are due to code not part of the original application code.  But, writing these types of directly focused bugs and viruses is not how some bitter hackers really got their revenge.

The year 1991 saw the passage of legislation in the U.S. Congress to create something to be called the “Information Super Highway”.  It was an idea that had found its roots in a U.S. Dept. of Defense project of the early 1970s run by the agency known as DARPA.  The project was to produce a logistical network to interconnect the different branches of the U.S. Defense Department’s computer systems, which had been purchased from a variety of vendors.

At the time each vendor had proprietary networking technology to lock you into purchasing only their products, and so systems from different vendors could not communicate.  The U.S. government could not buy exclusively from one vendor; they had to buy from the lowest bidder.  The hybrid network had been envisioned as the information equivalent of the interstate highway system created after World War II, which allowed vehicles of all shapes and sizes to move about freely and quickly without local interference.

The networking technology created by the DARPA project was called Arpanet.  It was 20 years ahead of its time, unfortunately.  1970s technology was not up to the task; the project ran over budget, was nearly cancelled, and was eventually used on a reduced scale by government-sponsored scientists.  For almost a decade, Arpanet had been the government-sanctioned, near-exclusive domain and plaything of scientists and researchers exchanging nuclear and weapons research data and papers across the U.S.

These researchers and their co-workers would go home and wish their home PCs could talk with other computers and exchange mail messages in the same way.  It was illegal for them to bring the work technology home because in many cases the scientists were working on top-secret Dept. of Defense research.  One central hub for nuclear weapons research was Oak Ridge, Tennessee.  Tennessee had a young Senator by the name of Al Gore, who had some long-time friends from college who worked at Oak Ridge and other locations.  Some of them presented their desires to the young Senator.

One of Senator Gore’s major accomplishments during his tenure in the U.S. Senate had been to sponsor legislation that would allow these researchers to use this computer network technology at home to communicate with their friends across the country.  In this way, no one was sending personal messages on secure computers, and the government research network could remain secure. 

Sen. Gore could not make it legal just for a hand-picked group of friends, so he made it available to the general public at large.  Computer modems were in widespread use at that time, but they allowed connection to only a single bulletin board at a time.  Switching boards was slow and time consuming.  That meant re-dialing and running the risk of not finding an open phone line to the board.  Data transfers were limited to simple text.  The new technology could put those inconveniences to an end.

Sen. Gore’s sponsored 1991 legislation, known as the High Performance Computing Act, mandated putting the communication backbone technology created by the Defense Department into the public domain.  It was expressly prohibited for anyone to own and control, or otherwise limit public access to, this new communication backbone.

This last provision was deemed crucial to spurring growth, because you could throw your hat into the ring with very little investment.  It mandated a permanently level playing field, and defined the playing rules as every man for himself.  The legislation contained content that fell in line with the economic policies of both those on the liberal far left and the conservative far right.  The law favored no individuals or groups because everyone had an equal chance.  There were no rules in place limiting your options, leaving you free to do whatever you wanted.

The legislation also commissioned the creation of a high-speed information super highway, and created additional funding for the existing four NSF National Supercomputing Centers around the U.S. to play a lead role with the DARPA technology.  These cities were home to universities that had played little-known but critical roles in the development of the Arpanet and parallel technologies during the earliest stages of the project.  These universities were: Carnegie Mellon in Pittsburgh, PA; the University of Illinois in Urbana, IL; Cornell University in Ithaca, NY; and the University of California at San Diego.

While the young Senator did not invent the technology, he certainly had a leading hand in creating the political atmosphere for the rise of the public domain Information Super Highway that we think of today as the Internet.

Once hackers discovered the Internet, they realized that they had an entirely new method to deliver computer viruses to attack the software of companies they disliked.  Their favorite target was Microsoft, which had so many products to choose from.  Some hackers would create viruses just for the thrill of seeing the headlines that a really damaging virus would create.  Some hackers created viruses with the intent to make illegal profits by stealing secure information.  Others simply stole CPU time from unsuspecting victims.  The motives are as endless as the variety of viruses.

Today, there are dozens of independent high-speed networks in the U.S. that resemble the Internet, but most are strictly for private or secured use.  The four NSF sites are still providing cutting-edge research and development to the computing industry and are credited with driving the growth of the Internet seen in the 1990s.  They also serve as self-redundant data storage centers for academic research, providing at least 10 petabytes of hard disk storage at each site, a function similar to that once served by Arpanet.  One petabyte is equal to one million gigabytes, 1,000 terabytes, or ten thousand 100 GB hard drives.  A little more than half of the 10 PB capacity is currently in use.
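The conversion chain is easy to double-check with a few lines of integer arithmetic (a quick sketch using decimal storage units, 1 TB = 1,000 GB):

```java
public class Storage {
    // Decimal storage units: 1 TB = 1,000 GB and 1 PB = 1,000 TB
    static final long GB_PER_TB = 1_000L;
    static final long TB_PER_PB = 1_000L;

    public static void main(String[] args) {
        long gbPerPb = GB_PER_TB * TB_PER_PB;   // 1,000,000 GB in a petabyte
        long drivesPerPb = gbPerPb / 100;       // how many 100 GB drives that is
        System.out.println("1 PB = " + gbPerPb + " GB");          // 1 PB = 1000000 GB
        System.out.println("1 PB = " + drivesPerPb + " drives");  // 1 PB = 10000 drives
    }
}
```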

Rudy  =8^D

A Brief History - 9 - The Return of the Hacker

PART 9: Silicon Wars: The Return of the Hacker

"A Brief History of Code", by Rudedog Hawkins


Now these AI enthusiasts were also the type of hacker that saw the symbiotic relationship between System Specialists and Application Specialists.  Most of the time they were one and the same.  All software engineers of that day held some interest in AI to some degree.  They tinkered and dabbled with the mundane software that could allegedly demonstrate “artificial intelligence” of one sort or another.  But, AI software was written differently from most code that was being commercially produced.  It was weird and hard to understand on first look.  The best of it was designed to operate in an interrupt driven environment, reacting to outside inputs and stimuli.  That was what microcontrollers did best, not CPUs.  Most microcontrollers had a pin for hardware interrupts.  The Intel designs used in the original IBM PC primarily used software interrupts.

By the mid-to-late-80s, Graphical User Interfaces, GUIs, were spreading like wildfire in the computer market.  Software written for a GUI, whether it was the OS or an application, had to be written a little differently.  Instead of running through a set of instructions start to finish, software engineers noticed that the code had to be written so that it waited for the user to do something and reacted to it.  This bore a remarkable similarity to AI software, which was also written a little differently.

Some GUI developers began to take a closer look at the software architecture of some of the AI software.  The interrupt driven environment was a perfect match for the newer GUIs.  An interrupt driven environment is one where the CPU sits idle until it receives an external signal on one of the pins of its IC chip, as on a CPU with hardware interrupts.  A CPU limited to software interrupts could only react to a specific instruction being executed at a fixed memory location, so it was thought to be impossible for it to just sit and wait.  Hackers cheated and found a solution.  They faked out the CPU by writing OS code to send periodic messages to the OS.  The message content was used to initiate the appropriate software interrupt.

A CPU could have numerous interrupt sources: a disk drive, the keyboard, the mouse, a serial port, and so on.  Depending upon what the signal/message is, the CPU will execute one subroutine or another.  These subroutines were initially known as interrupt handlers.  Interrupts were ideally suited for the multi-tasking environment that consumers desired and dreamed of.  Today, these message driven interrupts are known as events in .NET.
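The dispatch pattern described above---idle until a message arrives, then invoke the handler registered for that message type---can be sketched in a few lines.  Python is used here purely for brevity, not .NET, and every name in the sketch is invented for illustration.

```python
# Toy sketch of a message-driven dispatch loop: the program idles,
# pulls messages from a queue, and routes each one to the handler
# registered for its type -- an "interrupt handler" then, an "event
# handler" in .NET today.
from collections import deque

handlers = {}

def on(message_type):
    """Register a handler function for a given message type."""
    def register(fn):
        handlers[message_type] = fn
        return fn
    return register

@on("key_press")
def handle_key(payload):
    return f"key: {payload}"

@on("mouse_click")
def handle_click(payload):
    return f"click at {payload}"

def message_loop(queue):
    """Drain the queue, dispatching each message to its handler."""
    results = []
    while queue:
        message_type, payload = queue.popleft()
        handler = handlers.get(message_type)
        if handler:                      # unknown messages are ignored
            results.append(handler(payload))
    return results

print(message_loop(deque([("key_press", "A"), ("mouse_click", (10, 20))])))
```

A real GUI loop blocks waiting for the OS to deliver the next message rather than draining a pre-filled queue, but the routing idea is the same.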

Interrupt driven microprocessor designs had been around for a decade, and they all had interrupts in one form or another.  The typical commercial code written for them followed the procedure-driven patterns of the day.  The best AI software did not fit those classic patterns, but it was very good in an environment where what the program might need to do next could not be accurately predicted.  A complete pre-written list of procedures and instructions was out of the question.  A program had to be ready for anything the user might do next.  It had to react to a variety of inputs that could each vary in a wide variety of ways.  Just because it worked well did not mean that it was well written.  Procedural code was lacking in its capacity to react dynamically to interrupts.

Some computer languages were already starting to incorporate features suitable for AI programming and dynamic interrupts.  Most of these features and concepts had been around in one form or another since the 1950s, when the first specialized lists were conceived.  The problem was that no one had yet come up with a well-defined methodology for putting these concepts into practice, which made teaching someone how to write such code difficult.  This is why a lot of AI code resembled spaghetti.

AI code that used software objects was still written with a procedure-driven mindset.  The result was convoluted code that was hard to write, harder to understand, and nearly impossible to modify.  Seeing someone else's code in a scientific journal---written for different hardware from yours, naturally---and trying to test it out and play with it was futile.  Artificial Intelligence was more than just a novelty, but it was still a pipe dream.

Consumers were no longer purchasing computers because of what they were made from---genuine IBM/Intel parts.  Consumers purchased computers because of what they could be used for and what they could do.  The consumers and software vendors defined those rules.  The software manufacturers were defining the personal computer industry.  The hackers were defining the software industry.

The software manufacturers made a lot of money.  Many of the hackers behind the products did not.  Some companies rewarded the programmers behind their profitable products quite well.  Others did the best they could to spread the wealth, while some did not.  Some companies simply had too many people working on a single product to make all of them multimillionaires.  One such company was Microsoft.

During the PC Clone Wars, manufacturers lost some of their independence to build the products that they wished.  Clever marketing campaigns had consumers purchasing the most generously bundled clones, which were naturally packed with Microsoft software.  Microsoft was rumored to have been pushing PC manufacturers into shotgun weddings over the bootstrap loaders.  But Microsoft did not completely take over.  Suits were filed, laws were changed, new suits were filed, and other software vendors were no longer locked out of the OS market.  But several enemies had been made or vanquished along the way.

But in the end, the hackers won.  The hacker had made a comeback.  In some cases it was not a pretty one.  A lot of excellent programmers lost their jobs through no fault of their own or their company.  Big Blue had been surpassed by Big Bill as the king of computer software.  Big Bill had made some enemies through the use of business tactics long rumored to be ruthless.  Some hackers found a way to get even through ruthless software. 

Rudy  =8^D

A Brief History - 8 - The Manufacturers Strike Back

PART 8: Silicon Wars: The Manufacturers Strike Back

"A Brief History of Code", by Rudedog Hawkins

A couple of long-time hacker friends---both named Steve---got together in a California garage in the mid-70s with the idea of making owning and using a computer far less complicated.  Their idea was to sell it pre-built, instead of in kit form, and housed in an attractive enclosure that resembled a typewriter.  Believe it or not, almost all of the first low-cost computers were sold as kits to reduce cost, and most did not even include a cabinet to house the things.  Some, like the IMSAI 8080, were just an open rack of circuit boards so that the boards could remain cool.

These hackers created the first product that we would think of now as a personal computer around a new advanced microprocessor, the 8-bit MOS Technology 6502, all on a single circuit board.  A single-board computer was just as much a breakthrough as the original microprocessor itself.  Though the product was flawed from a features perspective, it still sold well enough for them to make a tidy profit.  It was not as flexible and easy to use as they desired from a software perspective.  So, the pair got together with another hacker friend to help them write a new OS, gave it a version of BASIC, and the Apple II was born.

But this also had another unforeseen consequence.  Apple Computer grew by leaps and bounds.  Support industries---hardware and software companies alike---grew up around it in California.  And quite naturally, some rivals sprung up in the area, rightly recognizing the need for those same support industries if their upstart companies were going to succeed.  Silicon Valley was being forged.

Apple Computer had found a new market: average consumers.  Established manufacturers scoffed at the PC idea, but eventually tried to jump into the game.  The IBM PC launched the microchip industry to the status of “darling” on Wall Street when it was introduced in 1981.

Unlike DEC, IBM decided to throw its hat into the ring of the burgeoning personal computer industry.  Unfortunately, IBM also decided to hedge its PC bet and continued to invest heavily in its own mainframes, which competed with those sold by DEC.  The business community wasn't jumping onto the personal computer bandwagon just yet.  Personal computers were simply not as cost-effective as the classic mainframe and its array of dumb terminals of the time.

The IBM PC was an instant success, most probably because it bore the IBM brand name.  IBM quickly improved upon it with the PC XT and the PC AT.  Big Blue had set the bar for the personal computer market.  Some jumped in after IBM, a little too late for original designs, only to discover a non-existent market share.  But they quickly discovered a new market with the average consumer.  The “home computer” market exploded into being, creating an entirely new front line in the Silicon War.

Others jumped in with IBM PC work-alike clones, with moderate to overwhelming success.  This sparked another war front known as the PC Clone Wars, which in turn sparked a heated legal battle over the new concept of intellectual property.  IBM claimed that the clones had copied IBM's internal programming.  The cloners claimed that their internal programming was different enough to be deemed unique, and the courts agreed with them.  The primary vendor of PC-DOS, Microsoft, provided this loophole to the cloners: Microsoft had given each cloner a short, unique segment of code to include in their machines that allowed all of them to run the exact same OS package.  This code segment was called a bootstrap loader, the program that loads the OS.

Some of the first leaders in the clone market were Dell Computer out of Texas, Compaq Computer out of Texas, and Bentley Systems in Pennsylvania.  Dell succeeded by selling directly to consumers by mail order.  Compaq introduced the first portable PC clone.  Bentley products were available only by mail order, and the company failed because it offered slow and miserable service.

Servicing computers opened up yet another front in the Silicon Wars.  Bentley had served as the poster child for how not to do it.  Gateway Computers of South Dakota succeeded with mail order, as did Dell, by providing reliable service.  The need for good computer service would lead to a proliferation of national computer chains that sold and serviced computers, so that consumers would not have to deal with the expense, the inconvenience, and the turnaround delays of shipping a personal computer by mail for servicing.

As a result, shipping companies sprung up overnight guaranteeing quick and accurate overnight delivery, undercutting UPS and the USPS.  Most notable of these new delivery companies was Federal Express, which created an innovative business model copied throughout the industry.  The delivery industry came along at just the right time.  Shopping from home had suddenly become far less costly, and much more convenient.  The later rise of the World Wide Web would eventually spell the demise of most, if not all, of the national computer chain stores.  The only survivors were those that did not limit themselves to just computers and electronics.

By the mid-80s, the rate of growth in CPU performance was beginning to suggest that CPUs might break the AI barrier someday soon, within people's lifetimes.  The AI enthusiasts found new life as the hardware began to catch up with what was needed for true artificial intelligence.  So they continued to practice their software crafts on smaller scales, preparing for the day when the hardware was robust enough.  The hardware wasn't quite there yet, but they pushed on with the software development.

The software drove the hardware development in some circles.  Hardware development drove software development in others.  The successes came from those who found the right balance between the rival hardware and software engineers.  There was a period when RISC, Reduced Instruction Set Computer, processors were all the rage.  Established companies such as DEC, which had scoffed at the personal computer as just a passing fad, led this drive, thinking they could create a new RISC-based, high-priced, high-performance market of their own.

The thinking was that less is best.  As far as the aerospace industry of the time was concerned, this was the absolute truth.  Engineers realized that the miracle of integrated circuitry had physical limitations that, once reached, could not be overcome.  Research pushed for more efficiency.

Strides were made to put more punch into the same size packages.  Personal computers were introduced that used the RISC processors, and these products also came with their own versions of some of the established languages.  As well made as these products may have been, they did not succeed over the long term because of their software. 

One product that comes to mind is the DEC Rainbow PC, which tried to introduce color graphics superior to the IBM PC's.  Unfortunately for DEC, Apple Computer was about to beat them in the “alternative PC” market with the announcement of the Macintosh, which Apple showed off almost a year ahead of its actual introduction.  A deadly shakeout ensued amongst the hardware manufacturers.  Some took years to collapse, but eventually fold they did.  DEC did not find a pot of gold at the end of its Rainbow; instead it found an eventual grave.

Not many application manufacturers were willing to rewrite programs just to run on someone’s new PC no matter who made it.  There were just too many new brands out there.  Instead of investing their own money into a new product, they preferred to wait and see how well the new PCs sold.  Without the latest versions of software, the products didn’t sell well.  The industry leaders were too well established for latecomers…no matter who the gatecrashers were.

Some computer makers were learning a hard lesson.  Software was beginning to define the rules.  Software didn't need a computer to run on, but a computer needed software to make it run.  A PC without software was worthless.  Seems kind of obvious, doesn't it?  Some manufacturers appeared not to have seen it that way and paid dearly for their lack of vision.  Software manufacturers knew it, especially Microsoft.

The PC Clone wars fully erupted when hardware manufacturers began bundling software with the products to make them more attractive to consumers.  Software vendors demanded a cut, and through litigation they got it.  The most long lasting legal issue was fought over which OS could get bundled with whose PC Clone. 

Microsoft was charging higher prices to manufacturers who did not include a Microsoft OS.  Microsoft was trying to force manufacturers to include their OS bundled with the various clones.  It reached a point where it was more expensive to buy an OS separately than it was to buy a bundled clone.  Other OS vendors and consumer advocacy groups sued Microsoft under existing monopoly and racketeering laws.  They lost.  Existing law did not account for software.

The Silicon War and the PC Clone War transformed all of the major chip companies.  IBM still exists.  DEC no longer exists.  The only companies to really flourish were the software vendors, and the clone makers who did not have a large investment in research and development.

MOS Technology survives today making dedicated microcontrollers, not CPUs.  Apple Computer chose the MOS 6502 for the original Apple II.  Apple chose Motorola for its big brother, the Macintosh.  This left MOS on the outside as the Apple II was technologically surpassed and eventually dropped from the Apple line of products.

Zilog is unrecognizable as a chipmaker, and makes most of its rapidly dwindling profits from software.  IBM wanted to use a chip that was familiar to them when they introduced the IBM PC.  The Intel 8xxx family of chips used a software interrupt architecture similar to that used in IBM mainframes.  The Zilog chips were hands-down superior to the Intel chips, but IBM went with the lowest bid, from Intel.  Not even the belated DEC Rainbow could lift Zilog to the size of Intel.

Motorola got out of the CPU business and refocused on cell phones.  The original Apple Macintosh used the Motorola 68xxx family of CPUs.  They were fast and ideally suited for fast I/O thanks to their hardware interrupt design.  But a hardware interrupt design meant that the number of interrupts you could define was limited by how many pins were on the final chip, and a true multi-tasking OS needed a practically unlimited number of interrupts.  Apple eventually switched over to Intel chips, and Motorola sold its CPU business and went back to its roots, communications.

Rudy  =8^D

A Brief History - 7 - A New Hope

PART 7: Silicon Wars: A New Hope

"A Brief History of Code", by Rudedog Hawkins

A new hope for realizing AI came down the pipe.  Enter the microprocessor and the microcontroller, circa 1971, by Intel and Texas Instruments respectively---Intel based in California, Texas Instruments in Texas.  The established industry leaders of the time were in the Northeast: Digital Equipment Corporation, DEC, was based out of Maynard, Massachusetts, and International Business Machines, IBM, was based out of Armonk, New York.  Other soon-to-be major players in the coming Silicon War were MOS Technology in Pennsylvania, Motorola in Illinois, and Zilog in California---one each in the East, the Mid-West, and the West.

In retrospect, company location may have proven to be critical.  Innovation without obvious usefulness was in the West.  Stagnation and intractability were rampant in the East.  At the time, California was the cultural hot spot in the U.S., with the rest of the country following its lead.  Mass-market, consumer-oriented companies producing products such as movies, albums, and fashions were moving to California in droves.  The East held itself up as the standard bearer of American society: Ivy League schools, the Statue of Liberty, and apple pie.  The Mid-West found itself feeding off the best of both worlds.

I should note that the MOS 6500 family was designed by ex-Motorola engineers and shared the same core instruction set philosophy as Motorola's 6800 CPU.  Curiously, Zilog's Z80 series was designed by an Intel ex-employee and used the same core instruction set as the Intel 8080.  The political stage for the coming Silicon War was being set.  It would be a battle that would dramatically transform all of the companies involved.  Of the survivors that remain today, only one, Intel, still relies on manufacturing microprocessors and/or personal computers for its major profits.

The 6xxx and 8xxx CPU “families” were most notable for their fundamental design differences in interrupt design and the size of their instruction sets.  The 6xxx family used a smaller set with hardware interrupts, while the 8xxx family used a much larger set with software interrupts.  These same fundamental differences also existed between industry leading DEC and IBM mainframes and mini-computers of the time period.

That first Intel microprocessor, the 4004, was one chip in a multi-chip set that performed the function of a general-purpose computer.  The TI microcontroller, the TMS1000, scaled down most of the general-purpose functions found in a multi-chip set into a single device designed for dedicated applications.  These devices incorporated what is known as LSI, Large Scale Integrated circuitry.

Integrated circuitry had been around for several years.  Ever since the invention of the solid-state transistor in the late 1940s, engineers had been building complex analog and digital circuitry on a single silicon slab.  But never had miniaturization and integration been done on the scale demonstrated by these new products.  The first computers were fabricated using vacuum tubes.  The 6502 CPU was no larger than a postage stamp, but its equivalent manufactured from vacuum tubes would have been the size of a commercial cruise ship.  Today's chips would have equivalents the size of the island of Manhattan, and in some cases orders of magnitude larger than that.  The digital revolution had just landed its first man on the moon.

This miniaturization carried the side benefit of higher speed, for many reasons.  For one, the signals had less distance to travel, which greatly reduced transmission-line effects from circuit board traces.  And since all of the circuitry was cast from the same silicon slab, all of it could be perfectly matched, negating manufacturing variations in silicon purity.

Previously, a CPU was comprised of dozens of discrete digital chips that filled at least one entire circuit board, and most CPUs had separate boards for the separate functional areas found in CPU designs.  Now the equivalent of several circuit boards was etched onto a single chip!  The microprocessor represented brute-force miniaturization, LSI, on a massive scale.

The microcontroller took a slightly different direction by going for all-out integration.  An entire advanced multi-chip set was on a single chip.  Putting it all onto one chip allowed for better performance at the tradeoff of versatility.  While this may have locked you into a specific chip set, the intended use of the product was that of a dedicated CPU running only one program, all of the time.  They made for some great alarm clocks, wristwatches, and calculators.  Every consumer had to have at least one digital gadget.  A decade later, microcontrollers were being created with onboard interpreters for high-level computer languages like BASIC and FORTH.

This naturally led to a cycle of entrepreneurs who started up companies in the hopes of getting rich quick.  It didn't work for almost all of them.  The startup costs and proprietary technology required for manufacturing chips proved too prohibitive for the small guy.  Instead, established manufacturers in the industry funded divisions or entirely separate companies to produce integrated circuit chips.  Everyone realized the double-bonus profit to be had from building the big-ticket items, microprocessor-based CPUs and microcontrollers.  And large profits were there just for the taking.

Even the stalwart DEC, whose mini-computers had helped put the first man on the moon, eventually jumped into the fray.  DEC was one of the last major companies holding back from entering a market perceived to be too volatile, and soon to be short-lived for many of the players and wannabes.  To a large degree, this assessment was quite accurate.  Too many companies were developing products that would too frequently be obsolete before they could bring them to market. 

DEC had preferred to stick to its mainstay cash cow, producing expensive mini-computers for business use.  Prior to microprocessors, most commercially sold computers came in two sizes.  A mainframe was a high-performance computer in a rack as large as a home refrigerator.  A mini-computer was a much smaller, less powerful system, which could vary in size from a modern microwave oven to a dishwasher.

Low cost microprocessors led directly to the commercial availability of low-cost computers for consumers.  At least low cost compared to the tens and hundreds of thousands that most mini-computers of the day cost.  Hobbyists could now purchase complete kits to fabricate their own computers for the cost of a television set.  But, you had to be a real enthusiast or an engineer to understand how to construct and use the things.  You had to be a hacker who didn’t mind losing some sleep, lots of sleep.

Low-cost computers were an existential threat to both IBM and DEC, but both were too blinded by their own size and momentum to see just how deadly the new threat could be.

Rudy  =8^D

A Brief History - 6 - The Origins of Objects

PART 6: Object Trek: The Origins of Objects

"A Brief History of Code", by Rudedog Hawkins

When the first computers began to “think” in the 1950s, engineers realized that their artificial brains were not really thinking at all.  Only those who did not understand how they worked perceived that a computer could think for itself.  The engineers knew all along that the CPU executed a sequence of pre-written, pre-determined instructions, and they did little to disabuse the public of the notion.  The mystery added mystique to the practitioners of the trade and kept nosy busybodies away.  The fact that cutting-edge development was in the interests of national security helped a little bit, too.

As much as engineers wanted to create an artificial mind that they could converse with, it was simply beyond their understanding and technology at the time.  The CPU was simply too slow compared to the human brain.  The BRAIN.  That was it!  They had a real model to imitate.

The human brain reacts to inputs, and creates outputs.  For example, the brain can hear music through the ears, which also stimulate pleasure centers that produce hormones that put us into a good mood.  Engineers realized that a computer could do the same thing, just on a smaller scale.  They recognized that their computers had a limited number of inputs compared to the human brain. 

Engineers also recognized the most critical differences.  The brain had dedicated areas for various types of processing, and seemed to have the capacity for simultaneous processing of inputs, each of which could produce an independent output.  Gee, we could walk and talk at the same time!  Sometimes the obvious is hidden because it is in plain sight.  The dream to invent artificial intelligence had a direction: imitate life.  Didn't they do that trying to invent the airplane?

It was not until the mid-to-late 1950s that engineers began to think of more creative concepts to model and imitate the human brain in a computer program.  A computer program can be thought of as a list of computer instructions.  They began to write “lists” of code that performed specialized tasks, just like the separate areas of the brain.  One of the oldest high-level computer languages, still in use today, was designed at this time to work in this fashion.  The language was called LISP, which was short for List Processor.

A program was written to interpret these instruction lists, and the first formal Interpreter for a higher-level language was born.  Engineers already had standardized syntaxes, Assembly Languages, for writing the lists of instructions that produced binary code.  Since everyone wanted to get in on the ground floor of high-level languages, higher-level standard syntaxes were conceived and published as industry standards.  They wanted a syntax that could be used on more than one type of CPU, provided you had the proper Interpreter.  Another language introduced at this time was Fortran, short for Formula Translator.
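The idea of a program-as-list with a separate program walking and executing it can be shown with a toy sketch.  This is illustrative Python, not LISP, and the tiny stack-machine instruction set here is invented for the example.

```python
# A toy interpreter in the spirit described above: the "program" is
# just a list of instructions, and a separate program walks the list
# and executes each one against a stack.
def interpret(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])      # put a literal value on the stack
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()                 # the final result

# (2 + 3) * 4, written as a list of instructions:
print(interpret([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))
```

The same instruction list could run on any machine that has this interpreter, which is exactly the portability argument made for early high-level languages.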

During this time period, computer hardware and software development was done primarily by engineers in the scientific and research fields.  Focus was put on turnaround time for solving long and complex equations.  Engineers wanted to enter their actual formulas and let the CPU interpret and solve them, instead of writing computer code to solve one specific equation.  But interpreted code was notorious for its slow execution compared to binary instructions.

Some engineers still wanted to imitate the multitude of inputs available to the human brain---inputs which, when stimulated, caused specific sections of the brain to activate and process the stimulus.  They wanted to give an AI the ability to see, hear, feel, and eventually think and reason.  But the engineers of the day fell short of imitating the human brain.  They soon realized that human vision was not made up of a single vision signal, but rather was comprised of millions of sub-signals, each generated by the separate rods and cones that make up the human retina.

Truly imitating life required far more computer capacity than what they had available.  They needed to split the functionality of one list into multiple identical lists.  So, they gave their instruction lists the characteristic of being identified with arbitrary symbols.  The original list became a roadmap for how the sub-lists of that type should operate and behave, and the roadmap could be provided with data to define how an individual sub-list should behave.  These were the first “objects” to be used in computer programming.  These roadmap definitions and resulting objects were not very different from the class definitions and instance objects that we use today.
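The roadmap-versus-sub-list relationship maps directly onto the class-versus-instance relationship we use today.  Here is a minimal sketch in Python; the retina framing and every name in it are invented for illustration.

```python
# One definition acts as the "roadmap" (a class), and each copy
# (a "sub-list", i.e. an instance) is configured with its own data.
class VisionCell:                     # the roadmap: behavior defined once
    def __init__(self, position):
        self.position = position      # per-instance defining data
    def stimulate(self, light):
        # fires only when the stimulus crosses a threshold
        return (self.position, light > 0.5)

# Many identical sub-lists, each identified by its own data:
retina = [VisionCell(i) for i in range(3)]
print([cell.stimulate(0.7) for cell in retina])
```

Each `VisionCell` behaves identically, but carries its own position, just as each early sub-list carried the data that distinguished it from its siblings.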

They were also stuck on figuring out all of the pieces and parts that were needed to make up an actual human brain model.  Modern medicine couldn’t even tell them all of the details.  They did realize that countless specialized pieces and parts were needed.  But, what?  And so, progress towards realizing an artificial brain was stymied by available science and technology throughout the ‘60s, ‘70s, and even most of the ‘80s. 

Work with objects did not stop, however.  It proliferated.  Engineers continued to improve upon the computer languages and software theory that used objects.  Although the concept of software objects was becoming better understood, they were still expensive to use in the ‘60s and ‘70s because of how much memory they consumed.  Then along came the microprocessor, which introduced LSI, Large Scale Integration, of computer chips.

It was soon recognized that the pace of hardware improvement would quickly render the memory issue moot, which is exactly what occurred in 1981 with the introduction of the microprocessor-based IBM PC.  Although software objects were quickly moving away from novelty and into reality as more people discovered them, most still saw them as curiosities.  The term geek, and the phrase computer nerd, were born.  Their best friends were computers.

Rudy  =8^D

A Brief History - 5 - The Next Generation of Code

PART 5: Object Trek: The Next Generation of Code

"A Brief History of Code", by Rudedog Hawkins

Computer languages underwent an evolution in how they were implemented on various computer platforms.  These implementations led to a separation of the developer's source code from any dependency on the actual hardware that it would run on.  This process of separating two or more tightly coupled entities from one another is known as Abstraction.  Advances in Operating Systems, memory space, and CPU performance made this evolution progress by leaps and bounds.

Those who wrote Operating System software were becoming regarded as the true gurus of the software industry.  System Specialists spent long days and nights hacking away at keyboards writing code that was to be used by those who they perceived to have lesser skills, the people who wrote applications.  A sibling rivalry was developing between Operating Systems specialists and User Applications Specialists.  A race began to see who could outdo the other.

At the time, not many involved realized the true nature of the race that had begun, and where it would eventually lead.  The most talented of these insomniacs were affectionately named hackers.  Hackers were the innovators and visionaries at the cutting edge of software development.  The founders of Microsoft and Apple Computer fit that definition of hackers.

Necessity is the mother of invention.  Each side in the rivalry needed the other but had no wish to admit their mutual dependence.  No one needed an expensive and fancy OS if there were no fancy and expensive programs to realize the power of the OS.  No one could write a fancy and expensive program if there were no fancy and expensive OS on which to host it.  Each side needed the other more than they knew.

However, some of the hackers did know and realize the true nature of the symbiotic relationship that was growing.  They realized that if the hardware could be abstracted, then so could the software.  In order for the software to be abstracted, then the rivaling sides would need to work together and not as adversaries because if one changed, then so would the other need to change to maintain the symbiosis.

The software was about to make a quantum leap in complexity.  The Procedure Driven code of the day was hard to write.  When computers and code were first invented in the 1950s, what is thought of now as Procedure Driven code is what was first created.  Procedure Driven code is what most of beginners are first introduced to when they begin programming.  The now classic “Hello World” program that introduces users to Visual Studio is basically a procedure driven program.  Just like a cooking recipe, it has a beginning and an ending.  The code is executed beginning with the first line and continues until the final line.

While there may be branching or looping instructions within the source code, the execution basically went from start to finish.  You could even have calls to subroutines, but a subroutine always returns to its calling point.  The code still began at the first line, and continued execution until it reached the final line.  Subroutines were also known as Procedures in some computer languages, which is what gives the programming style its name.
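The start-to-finish shape described above, with subroutines that return to their call site, can be sketched in a few lines.  Python stands in here; the recipe steps are invented for the example.

```python
# A sketch of procedure-driven flow: execution runs top to bottom,
# and each subroutine ("procedure") returns to the point that called it.
def mix(ingredients):                 # a subroutine/procedure
    return " + ".join(ingredients)

def bake(batter):                     # another procedure
    return f"baked({batter})"

steps = []
steps.append("start")                 # execution begins at the first line
batter = mix(["flour", "eggs"])       # call out, then return right here
steps.append(bake(batter))            # the next step runs after the return
steps.append("end")                   # ...and on to the final line
print(steps)
```

Contrast this with the event-driven style discussed earlier, where there is no fixed final line: the program simply waits for whatever happens next.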

There is nothing wrong with writing procedural code.  It is a time tested and traditional way of writing code.  When it is well written, it is quite easy to understand.  Most of the popular databases today are most efficient when they are controlled by or interact with procedural code.  I feel that SQL, Structured Query Language, which is used by many databases, is a textbook example of procedure driven programming.

Many hackers of that time period wanted to expand their resumes to get a leg up on their rivals.  So they started looking to learn new languages and researched the exotic.  Many took an enthusiastic interest in something that was “new” at the time, Artificial Intelligence.  It was not anything new.  It was no newer than mankind’s age-old dream of being able to fly.  It was just new to them.

So the rivals studied, and they argued who was better, and they existed in their mutual dependence no matter how much they may have disliked each other.  Some wise folks from both sides began to explore the AI topic.  They even explored it together in some cases.  And they discovered some things.

Rudy  =8^D

A Brief History - 4 - The Library from the Edge of Tomorrow

PART 4: Object Trek: The Library from the Edge of Tomorrow

The Microsoft computer language development packages underwent a major change that took the form of the .NET Framework.  The sheer volume of code in the release felt like a software Library of Congress being released by the U.S. government.  The .NET Framework represented a “5th Order” computer language design.  “4th Order” computer languages had been around for a couple of decades.  These languages could quickly compile source code into a custom intermediate form.  This custom file could then be executed by a custom Interpreter much more quickly than if it were run from the original source code alone.  This custom intermediate format frequently meant that everything worked better if it could fit inside one custom binary assembly.

The “5th Order” designs were meant to slay the beast that had long stood in the developer’s way: the difficulty of cross-platform and mixed-language program development.  While a “5th Order” design functions like a “4th Order” design in most every way, the difference is in the intermediate format of the code that is generated.  The .NET Framework introduced a standardized Intermediate Language, IL, as the output of the compilation process.

The compilers for several languages were completely re-written to generate this new standardized IL.  A standardized IL meant that a runtime Interpreter was required for each of the various manufacturers’ CPUs on the market.  More importantly, it meant that the cross-platform and mixed-language problems of the past could be eliminated.  It was no longer easier to fit everything into one assembly; it didn’t matter anymore.  Re-usable code took on a new meaning and an added dimension.
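
Python’s bytecode model offers a rough analogy (not the same technology as .NET’s IL, but the same idea): source code is compiled once into a CPU-agnostic intermediate form, and a runtime executes that form on whatever hardware is present:

```python
import dis

# Compile source text to an intermediate form (a code object holding
# bytecode), analogous in spirit to compiling C# or VB source to IL.
code = compile("x = 2 + 3", "<demo>", "exec")

# The intermediate instructions name operations, not CPU registers.
ops = [ins.opname for ins in dis.get_instructions(code)]
print("STORE_NAME" in ops)   # True: a symbolic store, CPU-agnostic

# The runtime (CPython's VM, playing the role the CLR plays for IL)
# executes the intermediate form.
ns = {}
exec(code, ns)
print(ns["x"])               # 5
```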

The .NET Framework also introduced another, even more significant change.  Since all of the languages compiled to a standard IL, all of the languages would need to be compatible with some common model of the hardware.  A brilliant design decision was made: abstraction.  They separated, or abstracted, the target hardware from the languages and created a virtual programming environment that did not care what the actual hardware configuration was when the program was executed.

The details of implementing and executing the code on specific hardware were left to the Compiler and the IL runtime, which was also written by Microsoft.  The .NET Framework uses an IL runtime known as the Common Language Runtime, or CLR for short, which in practice compiles IL to native machine code “just in time” rather than interpreting it line by line.

From a developer’s perspective, all hardware resources were now just another piece of abstract software.  This even included actual hardware like disk drives, mice and memory.  This is a critical feature that many new and old-time developers have trouble fully understanding.  How the hardware works can be regarded as a “don’t care” condition under most circumstances.

An implementation of managed memory was introduced with the CLR for the .NET Framework.  In the past, developers had to concern themselves with hardware-specific issues, such as which memory addresses were consumed by their program, other programs, the OS, and their own data.  This issue had been partially resolved with the introduction of multi-tasking operating systems, which used virtual memory.

The existing implementation of virtual memory, which allowed developers the freedom not to worry about other programs, was still too dependent upon the hardware platform where the program was being executed.  While developers were granted the freedom to assume that all available memory was theirs, they still had to worry about how much memory was available, and where.

Developers also had to worry about re-using available memory for data storage when writing more complex programs.  They had to clear out used memory before it could be re-used.  Sometimes they had to relocate large chunks of data stored in memory to make room for even more data.  A developer could not always store data in a contiguous segment of memory and had to resort to breaking it up into pieces, with the resulting management problems of keeping track of it as it grew and shrank dynamically.  Compounding the problem were third party software libraries that didn’t seem to care how much memory they consumed, nor where.

Managing memory became just as much a part of the development task as actually designing and implementing the software’s features.  It had long been recognized that managing memory slowed development, but what could you do?  Managing memory was as much a part of the development process as air friction is a part of flight.  It was a fact and a part of life.

Managed memory pretty much freed the developer from those worries.  Only when the application software interacted with hardware resources would the developer need to worry about managing memory.  Even then, it amounted to little more than calling a subroutine known as Dispose to let the CLR know that the application was finished interacting with a given hardware resource for now.  The same procedure was used to interact with memory, disk files, graphics, audio, etc.  The developer was free to focus on the application’s features, and not on actually implementing the application on a specific platform.
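
The pattern is easier to see in code.  A .NET example would use C#’s IDisposable, but Python’s contextlib.closing gives a rough standard-library analogue of calling Dispose when a block of work is finished (here a StringIO buffer stands in for a real hardware-backed resource):

```python
import io
from contextlib import closing

buf = io.StringIO()           # stand-in for a file, socket, etc.
with closing(buf):            # like calling Dispose on block exit
    buf.write("resource in use")
    contents = buf.getvalue()

print(contents)               # resource in use
print(buf.closed)             # True: the runtime may now reclaim it
```

The application signals “I am finished with this resource,” and the runtime handles the actual cleanup.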

Microsoft’s Visual Studio software uses the .NET Framework to provide developers with a sophisticated and powerful environment in which to flex their creative muscles.  This Integrated Development Environment, IDE, is targeted at the PC platform.  As of this writing, Microsoft has never introduced a version for the Mac platform.  The .NET Framework is now in its 4th generation.

Rudy  =8^D

A Brief History - 3 - The Attack of the Codes

PART 3: Silicon Wars: The Attack of the Codes

While the hardware technology was growing like a wildfire, the software was growing to keep pace with it.  In many cases, the software exceeded the capabilities of the hardware.  The latter kept customers coming back for more.  They wanted the latest and the greatest.  Software compilers and interpreters had grown extremely sophisticated as the high-level languages added new keywords and libraries to interact with hardware such as mice, audio, disk drives and color graphics.

The C language, later standardized by ANSI, added several “standard libraries” to leverage the features of the new IBM PC.  C also spawned a second language: Bjarne Stroustrup extended it with object-oriented features, first as “C with Classes” and then as the language we know today as C++.  Many of the newer libraries were developed specifically for the MS Windows multi-tasking environment.  At last it seemed that the computer market was settling down to a single Operating System.

But a single OS was not in the cards.  While all of this growth and evolution took place with the IBM PC, its chief rival from Apple Computer was the Macintosh.  The “Mac” always seemed to be at least one step ahead of the “PC” in terms of technological features.  After all, Apple had helped introduce the personal computer to the world in 1976 with the Apple I, and the following year with the legendary Apple II.  The IBM PC was not introduced until 1981.

These two computer giants began a marketing war that has lasted for decades, with no end in sight.  Apple Computer began with a slight technological lead, and still seems to have one.  Features that initially appeared on a Mac eventually found their way onto the PC.  The most significant of these features was the Operating System driven by a GUI, or graphical user interface.  After much litigation, the PC had its own GUI driven OS.  Because of the numerous PC clones, IBM lost market share and eventually sold its PC division.  But the battle continued.

Each of these personal computers initially used a different microprocessor, with a distinctly different design: the PC used Intel’s 8088, while the Mac used Motorola’s 68000.  Each CPU design reflected an emphasis on the tasks the CPU could be asked to perform, and each had arguable advantages and disadvantages over the other depending on the workload, whether intensive Input/Output, I/O, or crunching large amounts of data.

As the respective hardware improved in speed and performance, these distinctions tended to blur and fade away for the user.  Today, the differences are all but moot.  The end product of decades of competing technologies was that various high-level languages were written and evolved on each platform.  There were even personal computers introduced that used the other CPU.  The languages and hardware for each design had become so complex that specialists were needed to write commercial programs for the personal computers.  These specialists of the past have grown into today’s software developers.

Still, the same old beast could raise its ugly head.  The problem of cross-platform development still existed.  Commercial software vendors had to hire teams of specialists for each platform.  Software was tied to the platform for which it was targeted.  Vendors had to select a language to use.  Most found themselves with libraries of identical programs written in different languages.  This was undeniably inefficient.  There had to be a better way.

As the two hardware platforms grew out of the pack to become industry leaders, a similar revolution was occurring with the application software.  One giant that emerged was Microsoft, which had intimate ties to both Apple and IBM with the introduction of each manufacturer’s initial personal computer.  The Apple II was introduced with its own unique DOS and a version of BASIC known as Applesoft, which was written by Microsoft.  The IBM PC was introduced with an OS known as PC-DOS and a version of BASIC known as PC BASIC, both written by Microsoft.

The introduction of the PC hastened the end of the relationship between Microsoft and Apple.  Apple introduced a more sophisticated personal computer, the Macintosh, to compete with the now dominant IBM PC.  The Mac had its own software and OS written exclusively by Apple.  The “open architecture” Apple II experience led Apple to make the Mac a closed, proprietary design, which served only to feed the war between the industry leading designs.

Software specialists, as they were then known, soon found themselves moving from one commercial software vendor to another.  Software written for one platform using a given language had to be completely re-written from scratch to work with another language.  Automated translation from one language to another was still a long way in the future.  Vendors needed specialists for each platform and each language.

Microsoft was making record profits from the PC platform and decided to make a move that would help secure the future of software development on the PC by relieving the strain and cost of the development process.  Microsoft began selling software development packages in various high-level languages. 
Initially, the development packages were not exactly compatible with one another.  In time, it became possible to call libraries written in one language from a program written in another.  As software became more sophisticated, so did the bugs, and so did newer, bigger problems.  Mechanisms had been put in place in the OS to allow software vendors to update their software libraries by replacing them.  With the growth of the Internet, these mechanisms left most software vulnerable to hostile software.  The code was attacking the hosts.

But as computer security became an ever-larger problem, it grew increasingly apparent that Operating Systems needed a major overhaul in how they operated.  This meant that the computer languages used to develop commercial applications would first need to undergo a similar change.  Not until the commercial applications changed how they operated could the Operating System change how it operated to make itself more secure.  This is when Microsoft introduced the .NET Framework.

Rudy  =8^D