Tuesday, May 20, 2008
This senior VP talks about how Satyam and the IT industry are responding to new challenges.
One thing that stands out to me is the statement that they are moving from services to solutions. They imply that they are rebuilding or reprogramming businesses at the workflow / process level. They appear to be successfully applying technology build-out as a commodity service while implementing their solutions... Sounds like they're treating the enterprise as a sort of programmable platform, like SOA / BPM on a grand scale.
From the article:
"A solutions provider transforms business. The difference in business will happen when we change those business processes as well. That is where we are bringing in business transformation solutions — process optimisation, process reengineering, etc. "
My intuition doesn't quite square with Satyam's vision.
Lots of things have been pointing towards more innovation in the top layers of applications, built on a very stable technology base. To me, it still feels like there's an unspoken motivation for that: business leadership wants IT folks to make ruggedized app dev tools and hand them over to power users (and/or process owners). Business leaders want IT to get the C# off their business processes.
That's sorta where I started cooking up the metaware hypothesis.
I'm curious to know how Satyam's vision is really working. I guess we'll know in a few years.
‘Moving towards IP-led revenues’
Sunday, May 18, 2008
I looked up the term "Art" in the dictionary. The first definition is:
- the quality, production, expression, or realm, according to aesthetic principles, of what is beautiful, appealing, or of more than ordinary significance.
For me, regarding coding, it's a matter of remembering a few points:
- implementation is expression
- significance is subjective
- beauty is in the eye of the beholder
So code can be expressed, fundamentally, in a bunch of ways:
- simply
- cleverly
- elegantly
- seemingly naturally
- etc... ?
Simple, clever, elegant, seemingly natural expressions of all kinds are typically beautiful to a programmer, when they function correctly.
Of course, to me, the most beautiful implementations are the ones that elegantly express their business in a way that's very clear to anyone familiar with the problem domain at that abstraction level, and with the target platform(s).
politechnosis: Art & Science
Saturday, May 17, 2008
Reminds me of a quote from The Matrix movies...
Cypher [to Neo]: "I don't even see the code. All I see is blonde, brunette, red-head." :)
It's not quite like that, but you get the point. There's gotta be a back-story behind the witty writing. I suspect it has something to do with a programmer appreciating particularly elegant solutions.
One of the hard parts about knowing that programming is an artful craft is being forced to write artless code. It happens all the time. Risks get in the way... a risk of going over budget, blowing the schedule, adding complexity, breaking something else.
It all builds up. The reality is, as much as we software implementers really want application development to be an art, our business sponsors really want it to be a defined process.
The good news for programmers is that every application is a custom application.
It really sucks when you're surgically injecting a single new business rule into an existing, ancient system.
This is the case with one of my current clients. At every corner, there's a constraint limiting me. One false move, and whole subsystems could fail... I have such limited visibility into those subsystems that I won't know until after I deploy to their QA systems and let them discover it. If I ask for more visibility, we risk scope creep. The risks pile up, force my hand, and I end up pushed into a very tightly confined implementation. The end result is awkward, at best, and arguably even less maintainable.
These are the types of projects that remind me to appreciate those snips of inspirational code.
Don't get me wrong. I'm happy there's a fitting solution within scope at all. I'm very happy that the client's happy... the project's under budget and ahead of schedule.
The "fun" in this case, has been facing the Class 5 rapids, and finding that one navigable path to a solution.
politechnosis: Art & Science
Saturday, May 10, 2008
This question, Art vs. Science, has come up a million times in software development circles. Reading Paul Johnson's (Paul's Pontifications) blog post, in conjunction with a discussion in the Tech Mill at Edgewater, (thanks, Jason!) I have come to see that art and science are not as opposite as I once viewed them to be.
What hit me was that Paul makes the statement that there's no process to implementing software. I still disagree. There are many processes.
The number of processes that an implementer can choose from to write his/her code is often vast, and depends on the problem set. A problem set includes many things, including requirements, tools, target platform, development platform, existing code, and even the implementer's mood and frame of mind. That is what makes implementing code, like painting, or creating a recipe, an art.
Within a common implementation problem set, there can be a large number of processes which can be applied to derive valid solutions. In fact, there are so many, that some distinct processes may actually render the very same code. So, to be more clear, it's not that there's no process... it's that there's no single valid process.
Knowing that there's no one single valid process doesn't mean that we can't pick a needle from the haystack... if the process produces a solution within the problem set, it's good.
Now consider what happens when you start to narrow a problem set. There are lots of things you can do. Frameworks, platforms, clear and specific requirements, best practices, coding standards, well-structured architectures... these are all factors that limit the problem set. By narrowing a problem set, you narrow the number of valid processes. By narrowing the number of valid processes that a developer can choose from, lots of interesting things start to happen. You achieve more predictable results, and are more likely to achieve repeatable schedules... and you reduce overall project risk.
This is what's so interesting about contemporary trends in software development, such as Ruby on Rails... use of these tools narrows problem sets that developers face. This means the implementer can spend less time figuring out where the blanks are, and more time filling them.
Now let's take this further. What happens when you reduce the problem set dramatically...? Take a single, relatively well known problem, on a very specific platform, using a very small set of unambiguous expressions. You get a very tightly defined process. By doing this, you wring the art out of creating something, to the point where it becomes machinable. The process becomes realized as a factory.
So to answer the question... Art or Science?
It's a trick question... art and science are not exclusive opposites. Art is about freedom to choose your creative process. Science is about knowing what processes are available, and the pros and cons of each. So programming, like all creative activities, is usually art (except in single-process cases), and usually science (except in cases of serendipity and true miracles).
Paul's Pontifications: An Under-Appreciated Fact: We Don't Know How We Program
Thursday, May 8, 2008
Multi-point touch screen systems are starting to take shape out of the ether, and it really feels like it's going to usher in a new era of computing. We've talked about a few of them here in the Tech Mill. It's "Minority Report" without the goofy VR glove.
Microsoft's offering in this arena is Surface (formerly "Milan").( http://www.microsoft.com/surface )
From available marketing materials, Surface is much like the other offerings that are under development, with a few interesting differences. Rather than being an interactive "wall", it's a "table". In addition to responding to a broad range of touch-based gestures, Surface also interacts with objects. Some of its marketed use cases involve direct interaction with smartphone, media, and storage devices.
This week, I'm on a training assignment in New Jersey, but within a bus ride of one of the very few instances of Surface "in the wild".
I made it a secondary objective to hit one of the AT&T stores in mid-town Manhattan.
I had a lot of high expectations for it, so actually getting to play a bit with it struck me as a touch anti-climactic. The UI was great, but it was clear they cut costs on hardware a bit: responsiveness wasn't quite as smooth as the web demos. It did impress me with the physics modeling of the touch gestures... dragging "cards" around the table with one finger mimicked the behavior of a physical card, pivoting around the un-centered touch point as a real one would.
I was also a bit concerned that the security devices attached to the cell phones they had around the table were some sort of transponder to hide "vapor-ware" special effects. My own phone (an HTC Mogul by Sprint) was ignored when I placed it on the table.
All in all, I was happy to finally get to play with it. Between technology advances and price drops, this UI paradigm will start to make it onto the power business user's desk.
I mean, can you imagine, for example, cube analysis.... data mining... report drilling... and then with a few gestures, you transform the results into charts and graphs... then throw those into a folder on your mobile storage / pda device...
I'm still loving the idea of interactivity between physical and virtual (and/or remote) logical constructs...
Imagine bringing up the file server and your laptop on a "Surface" UI, and literally loading it with data and installing software with the wave of your hand....
Having a portable "PDA" device with "big" storage... in fact, big enough to contain a virtual PC image... In stand-alone mode, the PDA runs the VPC in a "smart display" UI. When you set it on a Surface, the whole VPC sinks into it. You get access to all the Surface functional resources including horsepower, connectivity, additional storage, and the multi-touch UI while the PDA is in contact. When you're done, the VPC transfers back to the PDA, and you can take it to the next Surface UI in your room at the hotel, or the board room (which has one giant "Surface" as the board room table.)
The preview is over at AT&T today. According to Wikipedia, Microsoft expects they can get these down to consumer price ranges by 2010 (two years!).
Sunday, May 4, 2008
The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software
Its tagline: "The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency."
Mr. Sutter's article suggests that because CPUs are now forced to improve performance through multi-core architectures, applications will need to typically employ multi-threading to gain performance improvements on newer hardware. He made a great argument. I remember getting excited enough to bring up the idea to my team at the time.
There are a number of reasons why the tagline and most of its supporting arguments appear to have failed, and, in retrospect, why that could have been predicted.
So in today's age of multi-core processing, where application performance gains necessarily come from improved hardware throughput, why does it still feel like we're getting a free lunch?
To some extent, Herb was right. I mean, really, a lot of applications, by themselves, are not getting as much out of their host hardware as they could.
Before and since this article, I've written multi-threaded application code for several purposes. Each time, the threading was in UI code. The most common reason for it: to monitor extra-process activities without blocking the UI message pump. Yes, that's right... In my experience, the most common reason for multi-threading is, essentially, to allow the UI message pump to keep pumping while waiting for… something else.
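The pattern I'm describing, a worker thread watching something slow so the UI loop can keep pumping, can be sketched in a few lines. My own code was .NET, but here's a rough, illustrative Python equivalent (the names and the simulated "UI loop" are mine, not from any real app):

```python
import queue
import threading
import time

def watch_external_task(results: "queue.Queue[str]") -> None:
    """Stand-in for monitoring an extra-process activity (a long copy, say)."""
    time.sleep(0.2)  # the slow, blocking wait we must not do on the UI thread
    results.put("copy finished")

def run_ui_loop() -> list:
    results: "queue.Queue[str]" = queue.Queue()
    worker = threading.Thread(target=watch_external_task, args=(results,), daemon=True)
    worker.start()

    pumped = []
    # The "message pump": keeps handling (simulated) UI events while the
    # worker blocks, instead of freezing the whole interface.
    while worker.is_alive() or not results.empty():
        try:
            pumped.append(results.get_nowait())
        except queue.Empty:
            pumped.append("tick")  # stand-in for processing one UI message
            time.sleep(0.05)
    return pumped

events = run_ui_loop()
```

The point is that the UI thread only ever does non-blocking work; the waiting happens elsewhere.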
But many applications really have experienced significant performance improvements on multi-processor / multi-core systems, with no additional application code written, changed, or even re-compiled to make that happen. A few reasons why:
- Reduced inter-process contention for processor time
- Client-server architectures (even when co-hosted, due to the above)
- Multi-threaded software frameworks
- Improved supporting hardware frameworks
The key is multi-processing, though, rather than multi-threading. Given that CPU time is a resource that must be shared, having more CPUs means fewer scheduling collisions and less single-CPU context switching.
Many architectures are already inherently multi-process. A client-server or n-tier system generally already runs as a minimum of two separate processes. In a typical web architecture, with an enterprise-grade DBMS, not only do you have built-in “free” multi-processing, but you also have at least some built-in, “free” multi-threading.
Something else that developers don’t seem to have noticed much is that some frameworks are inherently multi-threaded. For example, the Microsoft Windows Presentation Foundation (WPF), a general GUI framework, does a lot of its rendering on separate threads. By simply building a GUI in WPF, your client application can start to take advantage of the additional CPUs, and the program author might not even be aware of it. Learning a framework like WPF isn’t exactly free, but typically, you’re not using that framework for the multi-threading features. Multi-threading, in that case, is a nice “cheap” benefit.
When it comes down to it, though, the biggest bottlenecks in hardware are not the processors, anyway. The front-side bus is the front line to the CPU, and it typically can’t keep a single CPU’s working set fresh. Give it a team of CPUs to feed, and things get pretty hopeless pretty quickly. (HyperTransport and QuickPath will change this, but only to the extent of pushing the bottlenecks a little further away from the processors.)
So to re-cap, to date, the reason we haven’t seen a sea change in application software development is because we’re already leveraging multiple processors in many ways other than multi-threading. Further, multi-threading options have been largely abstracted away from application developers via mechanisms like application hosting, database management, and frameworks.
With things like HyperTransport (AMD’s baby) and QuickPath (Intel’s), will application developers really have to start worrying about intra-process concurrency?
I throw this one back to the Great Commandment… risk management. The best way to manage the risk of intra-process concurrency (threading) is to simply avoid it as much as possible. Continuing to use the above mentioned techniques, we let the 800-lb gorillas do the heavy lifting. We avoid struggling with race conditions and deadlocks.
When concurrent processing must be done, interestingly, the best way to branch off a thread is to treat it as if it were a separate process. Even the .NET Framework 2.0 has some nice threading mechanisms that make this easy. If there are low communications needs, consider actually forking a new process, rather than multi-threading.
In conclusion, the lunch may not always be free, but a good engineer should look for it anyway. Concurrency is, and always will be, an issue, but multi-core processors were not the event that sparked that evolution.