Tech in the 603, The Granite State Hacker

Hands on Surface

There’s a new UI paradigm developing, and some of it has already been released into the wild.

Multi-point touch screen systems are starting to take shape out of the ether, and they really feel like they’re going to usher in a new era of computing. We’ve talked about a few of them here in the Tech Mill. It’s “Minority Report” without the goofy VR glove.

Microsoft’s offering in this arena is Surface (formerly “Milan”): http://www.microsoft.com/surface

From the available marketing materials, Surface is much like the other offerings under development, with a few interesting differences. Rather than being an interactive “wall”, it’s a “table”. In addition to responding to a broad range of touch-based gestures, Surface also interacts with physical objects. Some of its marketed use cases involve direct interaction with smartphone, media, and storage devices.

This week I’m on a training assignment in New Jersey, but within a bus ride of one of the very few instances of Surface “in the wild”.

I made it a secondary objective to hit one of the AT&T stores in mid-town Manhattan.

I had high expectations for it, so actually getting to play with it struck me as a touch anti-climactic. The UI was great, but it was clear they had cut costs on the hardware a bit: responsiveness wasn’t quite as smooth as in the web demos. It did impress me with the physics modeling of the touch gestures… dragging “cards” around the table with one finger mimicked the behavior of a physical card, pivoting around the off-center touch point as a real one would.

I was also a bit concerned that the security devices attached to the cell phones around the table were some sort of transponder enabling “vapor-ware” special effects. My own phone (an HTC Mogul from Sprint) was ignored when I placed it on the table.

All in all, I was happy to finally get to play with it. Between technology advances and price drops, this UI paradigm will start to make it onto the power business user’s desk.

I mean, can you imagine, for example, cube analysis…. data mining… report drilling… and then with a few gestures, you transform the results into charts and graphs… then throw those into a folder on your mobile storage / pda device…

I’m still loving the idea of interactivity between physical and virtual (and/or remote) logical constructs…

Imagine bringing up the file server and your laptop on a “Surface” UI, and literally loading it with data and installing software with the wave of your hand….

or…

Having a portable “PDA” device with “big” storage… in fact, big enough to contain a virtual PC image… In stand-alone mode, the PDA runs the VPC in a “smart display” UI. When you set it on a Surface, the whole VPC sinks into it. You get access to all the Surface functional resources including horsepower, connectivity, additional storage, and the multi-touch UI while the PDA is in contact. When you’re done, the VPC transfers back to the PDA, and you can take it to the next Surface UI in your room at the hotel, or the board room (which has one giant “Surface” as the board room table.)

The preview is over at AT&T today. According to Wikipedia, Microsoft expects they can get these down to consumer price ranges by 2010 (two years!).

Tech in the 603, The Granite State Hacker

Multiprocessing: How ’bout that Free Lunch?

I remember reading an article, a few years back…

The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software

Its tagline: “The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency.”

Mr. Sutter’s article suggests that because CPUs are now forced to improve performance through multi-core architectures, applications will need to typically employ multi-threading to gain performance improvements on newer hardware. He made a great argument. I remember getting excited enough to bring up the idea to my team at the time.

There are a number of reasons why the tagline and most of its supporting arguments appear to have failed, reasons that, in retrospect, could have been predicted.

So in today’s age of multi-core processing, where application performance gains necessarily come from improved hardware throughput, why does it still feel like we’re getting a free lunch?

To some extent, Herb was right. I mean, really, a lot of applications, by themselves, are not getting as much out of their host hardware as they could.

Before and since this article, I’ve written multi-threaded application code for several purposes. Each time, the threading was in UI code. The most common reason for it: to monitor extra-process activities without blocking the UI message pump. Yes, that’s right… In my experience, the most common reason for multi-threading is, essentially, to allow the UI message pump to keep pumping while waiting for… something else.
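
As a minimal sketch of that pattern, here is what it can look like in Java Swing (Java is standing in for illustration here; the idea applies to any UI toolkit): push the slow, extra-process wait onto a worker thread so the UI thread keeps servicing its message pump.

```java
import javax.swing.JButton;
import javax.swing.JLabel;
import javax.swing.SwingWorker;

// Illustrative sketch: offload an extra-process wait so the UI message pump keeps pumping.
public class StatusPoller {

    public static void pollInBackground(final JButton button, final JLabel status) {
        button.setEnabled(false);
        status.setText("Waiting on external process...");

        new SwingWorker<String, Void>() {
            @Override
            protected String doInBackground() throws Exception {
                // Hypothetical long-running, extra-process wait
                // (a file copy, a remote call, a device sync, etc.).
                Thread.sleep(5000);
                return "done";
            }

            @Override
            protected void done() {
                // Runs back on the UI thread once the background work finishes.
                status.setText("External process finished.");
                button.setEnabled(true);
            }
        }.execute();
    }
}
```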

But many applications really have experienced significant performance improvements in multi-processor / multi-core systems, and no additional application code was written, changed, or even re-compiled to make that happen.

How?

  • Reduced inter-process contention for processor time
  • Client-server architectures (even when co-hosted, due to the above)
  • Multi-threaded software frameworks
  • Improved supporting hardware frameworks

Today’s computers are typically doing more, all the time. The OS itself has a lot of overhead, especially on Windows-based systems. New Vista systems rely heavily on multi-processing to get performance for the glitzy new GUI features.

The key is multi-processing, though, rather than multi-threading. Given that CPU time is a resource that must be shared, having more CPUs means fewer scheduling collisions and less single-CPU context switching.

Many architectures are already inherently multi-process. A client-server or n-tier system generally already runs as a minimum of two separate processes. In a typical web architecture with an enterprise-grade DBMS, not only do you have built-in “free” multi-processing, but you also have at least some built-in, “free” multi-threading.

Something else that developers don’t seem to have noticed much is that some frameworks are inherently multi-threaded. For example, Windows Presentation Foundation (WPF), a general GUI framework, does a lot of its rendering on separate threads. By simply building a GUI in WPF, your client application can start to take advantage of additional CPUs, and the program author might not even be aware of it. Learning a framework like WPF isn’t exactly free, but typically you’re not using that framework for its multi-threading features. Multi-threading, in that case, is a nice “cheap” benefit.

When it comes down to it, though, the biggest bottlenecks in hardware aren’t in the processor anyway. The front-side bus is the front line to the CPU, and it typically can’t keep a single CPU’s working set fresh. Give it a team of CPUs to feed, and things get pretty hopeless pretty quickly. (HyperTransport and QuickPath will change this, but only to the extent of pushing the bottlenecks a little further away from the processors.)

So, to recap: to date, the reason we haven’t seen a sea change in application software development is that we’re already leveraging multiple processors in many ways other than multi-threading. Further, multi-threading options have been largely abstracted away from application developers via mechanisms like application hosting, database management, and frameworks.

With things like HyperTransport (AMD’s baby) and QuickPath (Intel’s), will application developers really have to start worrying about intra-process concurrency?

I throw this one back to the Great Commandment… risk management. The best way to manage the risk of intra-process concurrency (threading) is simply to avoid it as much as possible. By continuing to use the above-mentioned techniques, we let the 800-lb gorillas do the heavy lifting, and we avoid struggling with race conditions and deadlocks.

When concurrent processing must be done, interestingly, the best way to branch off a thread is to treat it as if it were a separate process. Even the .NET Framework 2.0 has some nice threading mechanisms that make this easy. If communication needs are low, consider actually forking a new process rather than multi-threading.
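
Here is a rough sketch of that “thread as a separate process” mindset, with Java standing in for the .NET 2.0 mechanisms mentioned above: the worker takes an immutable input, shares no state with its caller, and hands back a single result at one hand-off point.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A worker treated like a child process: immutable input in, one result out,
// no shared mutable state, no locks. (Illustrative only.)
public class IsolatedWork {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        final String input = "some immutable payload"; // the only thing crossing the boundary

        Future<Integer> result = pool.submit(new Callable<Integer>() {
            public Integer call() {
                return input.length(); // stand-in for the real work
            }
        });

        System.out.println("Worker returned: " + result.get()); // single hand-off point
        pool.shutdown();
    }
}
```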

In conclusion, the lunch may not always be free, but a good engineer should look for it anyway. Concurrency is, and always will be, an issue, but multi-core processors were not the event that sparked that evolution.

Tech in the 603, The Granite State Hacker

Metaware

There’s been a fair amount of buzz in the IT world about IT-business alignment lately. The complaint seems to be that IT produces solutions that are simply too expensive. Proposed remedies range from “Agile” methodologies to dissolving the contemporary IT group into the rest of the enterprise.

I think there’s another piece that the industry is failing to fully explore.

I think what I’ve observed is that the most expensive part of application development is actually the communications overhead. It seems to me that the number one reason for bad apps, delays, and outright project failures is firmly grounded in communications issues. Getting it “right” is always expensive. (Getting it wrong is dramatically worse.) In the current IT industry, getting it right typically means teaching analysts, technical writers, developers, QA, and help desk staff significant aspects of the problem domain, along with all the underlying technologies they need to know.

In the early days of “application development”, software-based applications were most often developed by savvy business users with tools like Lotus 1-2-3. The really savvy types dug in on dBase. We all know why this didn’t work, and the ultimate response was client-server technology. Unfortunately, client-server application development methodologies also entrenched this broad knowledge-sharing requirement.

So how do you smooth out this wrinkle? I believe Business Analytics, SOA/BPM, Semantic web, portals/portlets… they’re providing hints.

There have been a few times in my career when I was asked to provide rather nebulous functionality to customers. Specifically, I can think of two early client-server projects where the users wanted to query a database in open-ended terms drawn from their problem domain. In both cases, I built application UIs that allowed the user to express query details in easy, domain-specific terms. The user’s expressions were translated dynamically by software into SQL. All of the technical jargon was hidden away from the user. I was even able to let users save favorite queries and share them with co-workers. The apps enabled the users to look at all their information in ways that no one, not even I, had considered beforehand. They worked without giving up the advances of client-server technology, and without forcing the user into technical learning curves. These projects were both delivered on time and on budget. Just as importantly, they were considered great successes.
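
The real projects were far richer than this, but a deliberately simplified, hypothetical sketch of that translation idea might look like the following. The domain terms, table names, and SQL fragments here are all invented for illustration; the point is that the user only ever sees the vocabulary on the left.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: the user composes a query from problem-domain terms,
// and software translates those terms into SQL the user never sees.
public class DomainQueryBuilder {

    // The vocabulary shown to the user, mapped to hidden SQL fragments.
    private static final Map<String, String> TERMS = new LinkedHashMap<String, String>();
    static {
        TERMS.put("overdue orders", "orders.due_date < CURRENT_DATE AND orders.shipped = 0");
        TERMS.put("east region", "customers.region = 'EAST'");
    }

    public static String buildSql(Iterable<String> chosenTerms) {
        StringBuilder where = new StringBuilder();
        for (String term : chosenTerms) {
            String fragment = TERMS.get(term);
            if (fragment == null) continue;            // ignore unknown terms
            if (where.length() > 0) where.append(" AND ");
            where.append("(").append(fragment).append(")");
        }
        return "SELECT * FROM orders JOIN customers ON orders.customer_id = customers.id"
                + (where.length() > 0 ? " WHERE " + where : "");
    }
}
```

Saved “favorite queries” then become nothing more than stored lists of chosen terms, which is what made sharing them between co-workers cheap.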

In more recent times, certain trends have caught my attention: the popularity of BI (especially cube analysis), and of portals/portlets. Of all the tools and technologies out there, these are the ones actively demanded by business end-users. At the same time, classic software application development seems to be in relatively reduced demand.

Pulling it all together, it seems like IT departments have tied business innovation to the rigors of client-server software application development, and by doing so, they have taken on all the communications overhead that goes with doing it right.

It seems like we need a new abstraction on top of software… a layer that splits technology out of the problem domain, allowing business users to develop their own applications.

I’ve hijacked the word “metaware” as a way of thinking about the edge between business users as process actors (wetware) and software. Of course, it’s derived from metadata application concepts. At first, it seems hard to grasp, but the more I use it, the more it seems to make sense to me.

Here is how I approach the term…
[Diagram: Application Space. The diagram shows the surface spanning the IT and User domains across technology and business-process space. This surface covers hardware, software, metaware, and wetware, including where these ’wares touch.]
As I’ve mentioned in the past, I think of people’s roles in business systems as “wetware“. Wikipedia has an entry for wetware that describes its use in various domains. Wetware is great at problem solving.

Why don’t we implement all solutions using wetware?

It’s not fast, reliable, or consistent enough for modern business needs. Frankly, wetware doesn’t scale well.

Hardware, of course, is easy to grasp… it’s the physical machine. It tends to be responsible for physical storage and high-speed signal transmission, as well as providing the calculation iron and general processing brains for the software. It’s lightning fast and extremely reliable. Hardware is perfect in the physical world… if you intend to produce physical products, you need hardware. Hardware application extends all the way out to wetware, typically in the form of human interfaces. (The term HID tends to neglect output devices such as displays. I think that’s an oversight… just because monitors don’t connect to USB ports doesn’t mean they’re not human interface devices.)

Why do we not use hardware to implement all solutions?

Because hardware is very expensive to manipulate, and it takes lots of specialized tools and engineering know-how to implement relatively small details. The turnaround time on changes makes it impractical, from a risk-management standpoint, for general-purpose business application development.

Software in the contemporary sense is also easy to grasp. It is most often thought to provide a layer on top of a general purpose hardware platform to integrate hardware and create applications with semantics in a particular domain. Software is also used to smooth out differences between hardware components and even other software components. It even smooths over differences in wetware by making localization, configuration, and personalization easier. Software is the concept that enabled the modern computing era.

When is software the wrong choice for an application?

Application software becomes a problem when it doesn’t respect separation of concerns between integration points. The most critical “integration point” abstraction that gets flubbed is the one between business process and the underlying technology. Typically, general-purpose application development tools are still too technical for user-domain developers, so quite a bit of communications overhead is required even for small changes. This communications overhead becomes expensive, and it is complicated by generally difficult deployment issues. While significant efforts have been made to reduce the communications overhead, these tend to try to eliminate artifacts that are necessary for the continued maintenance and development of the system.

Enter metaware. Metaware is similar in nature to software. It runs entirely on a software-supported client-server platform. Most software engineers would think of it as process owners’ expressions interpreted dynamically by software. It’s the culmination of SOA/BPM… for example, a BPMN (Business Process Modeling Notation) definition that is then rendered as a business application by an enterprise software system.
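
As a toy illustration of that idea (nothing here reflects any real BPM engine; the step names and format are invented), the business user’s definition is just data, and a generic software engine interprets it at run time:

```java
import java.util.Arrays;
import java.util.List;

// A toy "metaware" interpreter: the process definition is authored data,
// not compiled code, and the engine supplies the underlying technology.
public class MetawareEngine {

    public static void main(String[] args) {
        // Stand-in for a BPMN-style definition authored by a process owner.
        List<String> approvalProcess = Arrays.asList(
                "collect:expense-report",
                "route:manager-approval",
                "notify:accounts-payable");

        run(approvalProcess);
    }

    static void run(List<String> steps) {
        for (String step : steps) {
            String[] parts = step.split(":", 2);
            // The definition supplies the meaning; the engine supplies the plumbing.
            System.out.println("Executing '" + parts[0] + "' step for '" + parts[1] + "'");
        }
    }
}
```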

While some might dismiss the idea of metaware as buzz, it suggests important changes to the way IT departments might write software. Respecting the metaware layer will affect the way I design software in the future. Further, respecting metaware concepts suggests important changes in the relationship between IT and the rest of the enterprise.

Ultimately it cuts costs in application development by restoring separation of concerns… IT focuses on building and managing technology to enable business users to express their artifacts in a technologically safe framework. Business users can then get back to innovating without IT in their hair.

Tech in the 603, The Granite State Hacker

The Mature Software Industry, Corporate Specialization, p2

In driving down I-293 around the city of Manchester one night this weekend, I noticed some of the old factories across the river were lit up so you could see the machinery they contained. Those machines have probably been producing goods reliably for decades.

In my last post, (“Corporate Specialization“) I used an analogy of power plants to describe how software engineering groups might someday fit into the corporate landscape.

I found myself thinking that a more precise analogy would be to liken software application development to… hardware application development.

When it comes down to it, hardware, software… the end user doesn’t care. They’re all just tools to get their real jobs done.

I remember seeing this when I was a kid. I recall observing that when the Atari 2600 had a cartridge inserted and was powered on, the hardware and software components were functionally indistinguishable. The complete system might as well have been a dedicated-purpose hardware machine. It became an appliance.

Modern platforms don’t really change that appliance effect to the end-user.

So, aside from operators, I’m sure these classic brick-and-mortar manufacturers have technical people to maintain and manage their equipment. I’d be surprised to find out that they keep a full team of mechanical engineers on staff, though. It seems to make sense that a mature software development industry will start to look much more like that over time.

Further, take a look at computer hardware. It’s undergone some maturing over the past few decades, too. There really aren’t many companies that actually bake their own. I remember tinkering a bit with chips and solder. Then I started building PCs from components. While my current desktop at home is still one of my own “Franken-PCs”, I think it’s been over a year since I even cracked the case on it. I suspect that when I finally break down and replace the thing, it will be 100% Dell (or maybe Sony) [and Microsoft].

With respect to software engineering, never mind all that FUD we’re hearing about IT getting sucked into business units. That’s good for “operators” and “maintenance” types, who become analytics and process-management people in the new world order. I suspect the heavy-lifting software engineering will eventually be done mostly by companies that specialize in it.

With that, it might be more educational to look at the histories-to-date of the industrial-mechanical and electrical engineering groups to see the future of the software engineering group.

I think this might be where MS & Google are really battling it out, long term. As the software industry continues to mature, the companies with the most proven, stable software components will end up with the market advantage when it comes to building “business factories” out of them for their clients. …or maybe it will just be about the most matured development and engineering processes… or maybe both… ?

Tech in the 603, The Granite State Hacker

Corporate Specialization

There’s an old adage: “Anything you can buy, you can produce yourself for less.”

In our business, we’re well aware that there are a few fundamentally flawed assumptions with that sentiment. Despite the barriers to entry and many other recurring costs, somehow the idea seems pervasive in the business world.

I started my career in a consulting shop that insisted it was a product company. Then I moved, through market forces, into product-based companies. I stayed on board with some of them long enough to help shut out the lights and unplug the servers when sales didn’t hit the marks. The other big problem I’ve seen with product shops was that the engineering group typically went through its own “release cycles”… once the product was released, my team was cut to the bone.

I’ve never been in a classic IT shop, per se, but I’ve definitely been on tangential IT-support projects within “product”-oriented companies. In IT groups, I’ve often thought that companies might see IT projects as frills and overhead. At some level, I think the pervasive push for IT-business alignment is a countermeasure to that idea. Still, it seems IT projects are typically viewed as a liability rather than an asset. When it comes time for the company to refocus on its core competencies (which is inevitable in the ups and downs of business), IT projects are prime targets for cutbacks.

Since the “.COM bust”, in these types of companies, an engineer on the staff for just three years is often seen as a “long-timer”… but no one feels bad about that, since a lot of these companies fold in that time frame, as well.

After experiencing the product-based companies’ ups and downs, and seeing many colleagues who had already been through the IT shops, I’m convinced… the outsourced project / consulting route is really the wave of the future, even more than it has been in the past. It’s only a matter of time before business admits this necessity as well.

It makes sense… I wouldn’t expect every company to own and operate its own power plant.

Why should they typically want their own software engineering group if they don’t have to?

[Edit: See also Part 2 Here]

Tech in the 603, The Granite State Hacker

The Wetware Bus

A colleague pointed me to this post about display size improving productivity.

It brings up a favorite topic of mine… With my current laptop (@1024×768), it’s a bit of a pet-peeve.

It totally makes sense that tuning bandwidth to wetware improves productivity. Display real-estate is the front-side bus to wetware!

That’s why I also like the Dvorak keyboard layout… Personally, I never achieved, in twenty-odd years of QWERTY (with formal training), the level of proficiency that I’ve reached with the Dvorak layout over the past three years. Dvorak’s a hard curve to get past, but I committed to it in the interest of tuning bandwidth (and maybe hoping to reduce RSI).

Tech in the 603, The Granite State Hacker

All Exceptions Are Handled

A recent colleague of mine was fond of pointing out that “all software is tested.” This truism is a simplification, of course, but the basic premise is this: the bugs will be exposed. The only question is “by whom?”

I submit for your consideration another truism, perhaps a corollary to the former: “all exceptions are handled.” Yes, ALL exceptions… and I’m not just talking about a top-level catch-all exception handler.

That requires further explanation. In order to fully back that up, let’s talk about what a typical exception handling mechanism really is.

According to Bjarne Stroustrup in his classic (“The C++ Programming Language”), “the purpose of the exception-handling mechanism is to provide a means for one part of a program to inform another part of a program that an ‘exceptional circumstance’ has been detected.”

Unfortunately, Stroustrup’s book was published a decade ago, and the consensus on the subject among software engineers has hardly become less vague since.

Really, the typical exception-handling mechanism enables a delegator to effectively reconcile an expression of non-compliance in its delegated tasks.

So exception-handling mechanisms usually have several parts. To begin with, we have a task or process. Then there’s a task supervisor. The supervisor is the method that contains the typical try-block. This try-block generally wraps or makes calls to the task. There’s the exception object, which is really the information that describes an occurrence of non-compliance… the excuse, if you will. An instance of an exception object is not the actual exception itself, but merely the expression of the actual exception. Finally, there’s the handler, which is the reconciliation plan hosted by the supervisor.

In many languages, supervisor code can have zero or more reconciliation plans, each for a particular class of exception. A handler can also throw or re-throw, meaning a succession of supervisors could potentially be informed of an insurrection.
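
To put that vocabulary in code, here is a small, generic sketch (in Java, with invented names): the supervisor hosts the try-block, the exception object carries the excuse, the handler is the reconciliation plan, and anything the handler cannot reconcile is re-thrown to the next supervisor up.

```java
// Illustrative only: task, supervisor, exception object, handler, and escalation.
public class SupervisorSketch {

    static void task() {
        // The delegated task expresses non-compliance by throwing.
        throw new IllegalStateException("required resource was missing");
    }

    static void supervisor() {
        try {
            task();                               // delegation
        } catch (IllegalStateException excuse) {  // the exception object: the "excuse"
            if (canRecover(excuse)) {
                System.err.println("Reconciled locally: " + excuse.getMessage());
            } else {
                throw excuse;                     // escalate to the next supervisor up
            }
        }
    }

    static boolean canRecover(Exception e) {
        return false; // placeholder reconciliation policy
    }

    public static void main(String[] args) {
        supervisor(); // if no supervisor in the program reconciles it, wetware ultimately will
    }
}
```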

So it can be said that there are actually zero or more reconciliation plans or handlers… within the program. How do “zero handlers” square with the bit about all exceptions being handled?

We developers easily forget that software is always a sub-process of another task. Software doesn’t run for its own entertainment. Software is social. It collaborates, and blurs the boundaries between processes. Software processes that are not subordinate to other software processes are subordinate to… (I’ll borrow a term) wetware. When software experiences a circumstance that it cannot reconcile, the software will invariably express non-compliance in some fashion to a supervisor. In many cases, the supervisor is appropriately software. In many other cases, the supervisor is wetware.

Regardless… whether the program goes up in a fireball of messages, locks hard, blinks ineffectively, or simply disappears unexpectedly; these acts of insubordination are expressed to wetware. Wetware always has the final reconciliation opportunity. A user can take any and all information he or she has on the subject of the exception to handle that non-compliance.

The user could do a number of things: they could report it to their superior; they could ask someone else to investigate; they could try again. Even if they choose to do nothing, they are making that “informed” decision… and there it is. That’s what I mean when I say “all exceptions are handled.”

Now don’t worry. I’m certainly not about to advocate unprotected code.

Unfortunately, we developers aren’t trusting types. We go out of our way to protect against exceptions everywhere. We write code in every procedure that secretively skips over processing something if there’s a broken dependency. We write exception handlers that quietly log the details or completely swallow exceptions.

There are occasions when “do nothing” is a reasonable reconciliation. Much of the time, however, we engineers are engaging in illicit cover-up tactics. Handling decisions get co-opted from the real process authorities and made at levels where the best interests of the stakeholders are not considered. Often these cover-ups add redundant checks that make code harder to understand, maintain, and fix. The result is complexity that actually exceeds that of proper exception handling.
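
For contrast, here is what the cover-up and the more honest alternative tend to look like in code. This is a minimal sketch, with file I/O standing in for whatever dependency might fail:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: swallowing an exception versus passing it to a real supervisor.
public class CoverUpContrast {

    // The quiet cover-up: the failure is swallowed, and the "do nothing" decision
    // is made far from anyone who understands the stakes.
    static String readQuietly(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException swallowed) {
            return ""; // the caller, and the user, never find out
        }
    }

    // Letting the supervisor decide: translate the failure and pass it up to code
    // (or wetware) that actually owns the reconciliation.
    static String read(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            throw new UncheckedIOException("Could not load " + path, e);
        }
    }
}
```
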
Where does this get us? You gotta be fair to your stakeholders.

Keeping all stakeholders in mind is critical to building an exception throwing and handling strategy, so let’s dig in on that a bit. The first stakeholder most everyone thinks of in just about any project is the project sponsor. The business requirements most often need to support the needs of the project sponsor. The end user comes next, and things like client usability come to mind. Sometimes end users fall into categories. If your program is middleware, you might have back-end and front-end systems to communicate exceptions with. In some cases, you not only have to consider the supervisory needs of the end user, but also, potentially, those of a set of discrete back-end system administrators, each with unique needs.

Remember, however, that during project development you, the developer, are the first wetware supervisor and stakeholder of your program’s activities. You’re human; you’re going to introduce errors in the code. How do you want to find out about them?

Here’s a tip: the further away from the original author that a bug is discovered, the more expensive it is to fix.

You want mistakes to grab you and force you to fix them now. You do not want them to get past you, falsely pass integration tests, and come across to QA, or end users, as a logic flaw in seemingly unrelated code.

Again, in the typical cover-up strategy, dependency-checking code becomes clutter and adds complexity. You may find yourself spending almost as much time debugging dependency checks as actually fixing functional code. Part of a well-designed exception strategy is writing clean code that keeps dependency checking to a minimum.

One way to make that possible is to set up encapsulation boundaries. This is also a good practice for other reasons, including managing security. Top level processes delegate all their activities so they can supervise them. Validate resources and lock them as close to the start of your process as possible, and throw when it fails. Validate data and its schema when and where it enters the process, and throw when it fails. Validate authentication & authorization as soon as you can, and throw when it fails. Once you have your dependencies sanity-checked, clean processing code can begin processing.
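
Here is a minimal sketch of that boundary approach, with invented names: dependencies are validated once, on the way in, failures throw immediately, and the code inside the boundary stays free of defensive clutter.

```java
import java.util.List;

// Illustrative only: validate at the encapsulation boundary, then process cleanly.
public class ReportJob {

    private final List<String> rows;

    // The encapsulation boundary: everything is checked on the way in.
    public ReportJob(List<String> rows, boolean callerIsAuthorized) {
        if (!callerIsAuthorized) {
            throw new SecurityException("caller is not authorized to run reports");
        }
        if (rows == null || rows.isEmpty()) {
            throw new IllegalArgumentException("report requires at least one input row");
        }
        this.rows = rows;
    }

    // Inside the boundary, the code can assume its dependencies are valid.
    public int process() {
        int processed = 0;
        for (String row : rows) {
            processed += row.length(); // stand-in for real processing work
        }
        return processed;
    }
}
```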

Don’t forget that the UI is an edge, too. Not only should input be validated there, but reconciliation code that respects the user’s needs should also be put in place.

Thrown exceptions, of course, need to be of well-known domain exception types that can be easily identified in the problem domain and handled well. Don’t go throwing framework exceptions; it is too easy to confuse a framework exception for an application exception. Some frameworks offer an “ApplicationException” designed for the purpose. I might consider inheriting from ApplicationException, but throwing a bare “ApplicationException” is probably not unique or descriptive enough within the problem domain to make the code understandable.
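
The paragraph above is about .NET’s ApplicationException, but the same idea in Java might be a small, well-named domain exception along these lines (the name and field are invented for illustration; checked versus unchecked is a separate debate):

```java
// Illustrative domain exception: the type name itself speaks the problem domain.
public class OrderNotFoundException extends RuntimeException {

    private final long orderId;

    public OrderNotFoundException(long orderId) {
        super("No order exists with id " + orderId);
        this.orderId = orderId;
    }

    // Carries the "excuse" in domain terms, so supervisors can reconcile meaningfully.
    public long getOrderId() {
        return orderId;
    }
}
```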

By taking an encapsulation-boundary approach, lower-level tasks can cleanly assume their dependencies are valid. If a lower-level task fails due to a failed assumption, you’ll know instantly that it’s a bug, and you’ll know the bug is either right there at the failure point or on the edge where the dependency or resource was acquired.

Another important consideration in an exception-handling strategy is code reusability. It often makes sense to decouple a supervisor from its process. In this way, it becomes possible to apply consistent supervision over many different processes. Another interesting possibility this opens up is code execution with design-time or even run-time configurable supervision. Each different “supervisor” construct could respond to the informational needs of a different set of stakeholders. Finally, handling can be decoupled from supervision. This provides another way to make stakeholder needs configurable.
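
A rough sketch of that decoupling, with invented names: a reusable supervisor takes the task and an injected reconciliation plan, so different stakeholders can be served through configuration rather than new code.

```java
import java.util.concurrent.Callable;
import java.util.function.Consumer;

// Illustrative only: supervision decoupled from the supervised process,
// with the handler (reconciliation plan) injected by configuration.
public class ConfigurableSupervisor {

    public static <T> T supervise(Callable<T> task, T fallback, Consumer<Exception> handler) {
        try {
            return task.call();
        } catch (Exception excuse) {
            handler.accept(excuse); // the configured reconciliation plan
            return fallback;
        }
    }

    public static void main(String[] args) {
        // Same supervisor, two different stakeholder-facing handlers.
        String forDeveloper = supervise(() -> riskyLookup(), "n/a",
                e -> { throw new IllegalStateException(e); });           // fail loudly during development
        String forEndUser = supervise(() -> riskyLookup(), "n/a",
                e -> System.err.println("Lookup unavailable, using default.")); // gentle message in production
        System.out.println(forDeveloper + " / " + forEndUser);
    }

    static String riskyLookup() throws Exception {
        return "value"; // stand-in for a call that might fail
    }
}
```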

Ok, so I’m getting carried away a bit there… but you get what I’m saying.

By designing encapsulation boundaries, and by decoupling the various parts of exception handling, you can support effective supervision and reconciliation through each phase of the project life cycle via configuration. In this way, you can honor all the stakeholders during their tenure with the application, and write better software.

[3/12/08 Edit: I originally liked the term “trust boundaries”, because it focused on being able to trust dependencies, and also brought security as a dependency to mind, but “encapsulation boundary” is much more precise. Thanks, Kris!]