
The Death of the Technologist

In the world of technology, we focus so heavily on the evolution of the technology itself. New features, new releases, new terminology, methodology, ontology, buzzwords, languages, products, vendors and devices. We hardly ever focus on the changing nature of the people who use it. Who is the modern technologist? If we take the example of the computer, in the first half of the 20th century the technologist was a mathematician, an academic… a necessarily brilliant mind.

[Image: ENIAC]

It was necessary not just to build the machine, but every component within it, from first principles. He or she (and this was really a fair split) had to understand every aspect of the machine – the benefit the machine brought was simply that once it was going, it could work faster than a team of humans; maybe only just! A visit to Bletchley Park will show you the fulcrum of modern computing (specifically the ‘Turing Bombe’), and by that I mean the precursor to the modern era of IT.

This is the computer: a visceral, living and breathing thing – clattering, whirring, incredibly hot; large like a weaving loom; physical and industrial. You can see and hear every moving part. The point is that there were components. The theory, rules, logic, computation and, importantly, the usage of the technology were all starting a journey – becoming extracted from the physical engineering; a journey which has never stopped, and will not stop – it continues to this day. The reduction in size of the computer in the late 1960s and early 70s marked a blip in this evolution, putting the technology into the hands of the individual – generally still an academic or maths genius, but one also interested in electronics, in how the new availability of computing power could help them, and in how they could fiddle with the innards!

The computer arrived on every desk and in every home during the 1980s and 90s, used as more than a hobby – and often the case stayed firmly closed. The movement to put technology in the hands of non-specialists was gaining momentum. The computer was seen for what it could do, not for what it was in terms of circuit boards and components. As a child in the 1980s, I remember vividly, like so many others, the Sinclair ZX Spectrum. I can’t be the only kid who was bought one of these to “help with homework”, only for it to end up being used solely for the delayed gratification of cassette-loaded games and the very delayed gratification of copying BASIC programs out of magazines. Even the non-techy kids enjoyed their new-found ability to create something, to make a small blob move in a rudimentary way across a grainy TV screen. Most of these kids didn’t go into software-related careers; their enjoyment was in creating something, not in the code itself. The interface to the technology was still very ‘techie’, but the beginnings of more real and useful ways to use it were showing themselves.

Computing becomes commonplace over the ensuing years. Jobs start requiring and expecting office application skills; more advanced computers exist at home for personal accounting, cataloguing your music collection, printing things out and writing letters. Also around this time, mobile telephones are born. Computing technology becomes more common because of what it can do, how it can help people in their day-to-day lives. Getting involved in the dirtier aspects such as re-installing operating systems, formatting hard drives, the trials and tribulations of the ubiquitous floppy disk, installing applications, “Windows cannot find your printer”… these were all necessary evils to the computer user rather than the main event. If you want the benefits, you have to take some of the pain – and users did in their droves, because that was what was required. Then something interesting happened – the internet suddenly appeared. Well, by the time most people had heard of it, it had been there for quite a long time, but as very much a techie concept, a niche pursuit… the extension of the bulletin board, newsgroup, Prestel and so on. The birth of the internet, HTML, browsers… it’s all been covered at length, so I won’t go into that here. Suffice it to say, given a platform to create anything, essentially for free, to show it to anyone who might want to see it… for people to find it when they didn’t know it was there… it was never about the internet, it was about what you see in your browser. The possibilities are endless, limitless, and it evolves every day. Have a play with the Wayback Machine if you don’t believe how much the art of web content has come along in 20 years… even 5 years.

If all you need is a browser – a simple, transparent window to a world – then can’t you do away with everything else you don’t need? If everything you need is delivered to you without the perceived ‘evil’ of snake-charming an awkward computer prone to prolonged sulks and unexplained problems, why wouldn’t you? For most of the world, of course, you would. Even as a dinosaur technologist, I type this on a MacBook Air – because it’s easy, quick and it works. I spent 5 years with Ubuntu Linux as my only operating system, and whilst it has its place and I will remember it dearly… I pick my battles, and this is not one. So the technology that just works, that sits in your hand wherever you are, wirelessly connected to that unrelentingly creative, global and plain huge Andy Warhol studio that is the internet, hiding away its components so neatly, with an interface so ergonomic and intuitive that every parent has a story about their two-year-old being able to use it… why would you go back? You could call this the consumerisation of IT, but it’s just this – do you want to do it the easy way or the hard way?

The most recent change has been the consumerisation of technology creativity. We are starting to see a lesser need to know how to code in order to create, assemble, improve and deliver technology. This relies on ever-increasing complexity underneath, but for the bulk of people making use of these technologies, the way that they go about doing so speaks to them… it’s closer to the end result, feedback is direct and human, and the device and the means to create just fit you.

[Image: iPhone 5S]

Much of my interest in technology is the interface between the human and the machine. Researchers refer to this as Human Factors, or Ergonomics, and it is most definitely an art, but one grounded in science. We’ve just got better at fitting machines to people, but this also changes the way we behave.

So, we come to the end. There is an eroding need to be a technologist to be involved with technology. This creates a bit of a paradox: on the one hand IT becomes ‘cool’ and is no longer the perceived province of the geek; on the other hand you don’t need IT skills to do basic things anymore, or even reasonably advanced and creative things. Obviously technology won’t turn you into the next Mozart, van Gogh or Bill Gates without a degree of talent, but you are no longer required to have a talent for computing first. The technologist isn’t really dead (of course!); instead their job has become quite a lot more varied and tough.

[Image: Google Glass]

Some incredibly hard work goes into making the average person feel expert, reducing the barriers to entry. The role of the technologist has changed hugely, however, moving away from the purely academic towards the incredibly applied and results-oriented. The new task is becoming the interface between the high-tech and the ‘everything else’; and the tech needs to fit right in at that party. I heard a presentation yesterday about the ‘no clue’ customer, referring to the emerging trend for stakeholders outside of the IT department to engage with technology providers to create new services, but with little knowledge of or skill in IT themselves. So, business imitates life; and long live the technologist!

Consumerisation of IT? There, dragons be… OR Creative Control vs Controlled Creativity

There is much talk about the ‘consumerisation of IT’. That is, the more pervasive, more friendly and useful nature of today’s technology means that you no longer have to be an MIT postgrad to operate it; nor in fact a technical professional, or even a ‘geek’. Devices are now so intuitive that toddlers can pick up an iPhone and use it before they can read or write. Incredibly complicated software applications can now be used by everyday people, because all that complexity is obscured behind well-designed, simple and usable interfaces. Complexity and intelligence are advancing and progressing at one end, and at the other we now understand more about how this can be harnessed by humans than ever before, and this too is advancing at a rapid rate. The gap between the technology and how it manifests itself is widening every day. You can all call it “MacPherson’s Gulf” if you like 🙂

Clever packaging, desirability and the ‘death of boredom’ are also contributing factors. I am sure I don’t need to mention the fruity tech company worth more than many nations’ GDP; you already know who they are, and that’s exactly the point. Finally, and most surprisingly, my Mum now owns and enjoys using a tablet PC. She hasn’t asked me for help even once, whereas I am sure we are all familiar with the ‘free family member technical support’ those of us working in IT were obliged to offer in the era of the PC. Which, I reckon, pretty much ended this year.

This has all been done to death in the tech press – the reason I mention it is because it throws up issues which are less well covered. Now that non-techies love devices, they’re getting involved in other previously geek-only activities with a relatively high barrier to entry. Developing apps, for instance – lots of people are getting involved, and there are toolkits to make this as easy as it has ever been. App Store-type marketplaces mean that you can take an app to market with no real investment, charge for it and make money. The consumerisation aspect means that you can even get away with having far less skill. A good thing, right?

WRONG

Well, it’s partly a good thing, but IT is basically complex. Good design isn’t just about what you can see (in fact, most of it is what you can’t see as a user), and developing in a fast-paced agile manner doesn’t mean you take the path of least resistance when developing apps and services. Quite the opposite. Now, when we are talking consumer apps that turn the screen light on to be used as a torch, or let you change the colour of the LED when you get a new email, who really cares? Well, the customers care, but they can vote with their feet. The real issue is when we apply a gung-ho gold-rush mentality to developing a mobility solution or capability in the business. We wouldn’t just ‘knock up’ a line-of-business application to run on a desktop workstation, so why on earth would we do that for a tablet or smartphone?

Technology that harnesses the creativity whilst still enforcing good governance and design will be the platform for success. It might be easy to create a cool app to show the CTO that we can see real-time customer order information on an iPad, but that doesn’t mean we should show it to anyone else until we’ve done it properly! I’m sure there will be some high-profile hiccups along the way when businesses start really ramping up the service mobility aspect. The uptake hasn’t kept pace with device innovation so far; I can only assume there is some hanging back – nobody wanting to be the trail-blazer.

Boundaries are important, but we need to be sure to engender creativity and freshness without stifling them with process and paperwork. I guess it’s like being a good parent. We shouldn’t try to be ‘down with the kids’; we should set clear house rules, but at the same time let them have the keys to the car every now and again for a data-date down at the drive-in…

Automation, feedback loops and the redundant human

AKA Who watches the watchmen?
OR Dirty deeds done dirt cheap
OR Sit back and enjoy the ride?

I’ve been to a number of analytics, big data, smarter planet (IBM’s initiative to make the world a better place using technology) and automation presentations in recent times, and I wonder whether there is a chord left un-struck with most of those involved in the fast pace of progress. The attention is focused on the technology; how cool and smart it is, how much better it is than people and their error-prone fat fingers and sleepy attention spans. I wasn’t always a technologist, and in fact my undergraduate degree was in Psychology. I have the same background as Mark Zuckerberg of Facebook infamy, but please don’t get us confused – I have different hair.

The last ‘AKA’ title up top was the subtitle to my undergrad thesis, which described a phenomenon known as the “out-of-loop performance problem”: an inherent and very human issue in automation, brought about by the changing nature of human involvement in sophisticated and complex systems such as cars, planes and nuclear reactors as they move rapidly towards closed feedback loops of many systems of systems. IT is fast moving towards the same human problem, and here’s why.

The technology emerging today is incredibly exciting and the time we live in is so poignant. What I mean by that is that many different streams of technology are coming together – and I don’t mean converged devices or the consumerisation of IT per se – but that we have emerging clusters of expertise. Whereas previously we had a whirring box that did everything (the jack of all trades), now we have defined and segmented architectures that break out of the enterprise, like a living, breathing entity. Applications have emerged that are optimised for providing one specific service incredibly efficiently and effectively, and the capability to integrate these applications together, and to integrate applications to underlying layers now means that we are building a super-intelligent mesh; a matrix of information.

We’ve enabled it – by architectural segmentation, partnering, outsourcing, SOA, BPM, cloud, and obviously the internet at the core of it all. Software-defined networks mean that we can even morph and change virtual connectivity circuits in a relatively touchless manner. We talk of round-tripping from data capture, analysis, interpretation, action and back around – these are essentially and fundamentally just closed feedback loops. Smarter Planet relies on this premise – that we have pervasive intelligent devices capturing vast amounts of data for us to harvest, analyse and act upon. More and more information, more accurate models, greater insight, better decisions – and now we have all the pieces of the puzzle to start making that a closed-loop reality! The architecture is rapidly becoming an automaton. Not one that can quite take over the world or express a genuine desire for your clothes and your motorcycle, but still… my main warning surrounds how this changes the role of people.
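
To make the closed-loop idea a little more concrete, here is a minimal sketch in Python of the capture, analyse, act round trip. Every name in it (sensor_reading, analyse, act, the scaling rule) is hypothetical and purely illustrative; the point is simply that the system measures, models and adjusts itself with no human in the path.

```python
# A toy capture -> analyse -> act feedback loop. All names and thresholds are
# invented for illustration; nothing here reflects any particular product.
import random
import time

def sensor_reading():
    """Stand-in for a pervasive device reporting a metric, e.g. requests per second."""
    return random.randint(50, 150)

def analyse(history):
    """A deliberately crude model: the average of recent readings."""
    return sum(history) / len(history)

def act(average, capacity):
    """Close the loop: adjust capacity based on the analysis, no human involved."""
    if average > 100 and capacity < 10:
        return capacity + 1   # scale out
    if average < 80 and capacity > 1:
        return capacity - 1   # scale in
    return capacity

capacity = 2
history = []
for _ in range(20):                  # in reality this loop never ends
    history.append(sensor_reading())
    history = history[-5:]           # sliding window of recent readings
    capacity = act(analyse(history), capacity)
    print(f"average={analyse(history):.1f} capacity={capacity}")
    time.sleep(0.01)
```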

The fact is – and this isn’t meant to be an Eeyore view of the world – things go wrong. Stuff breaks. Models have flaws. Software has bugs (very, very occasionally 🙂 ). Nothing is 100% reliable. Even those systems conceived to monitor others can themselves break. How often do you hear of someone taking their car in to a garage because a red light came on, only to be told it was a faulty sensor?

So if the watchmen have flaws, who watches the watchmen? Just as the role of the IT Manager has shifted towards supplier management, the role of the ops engineer is changing too: instead of watching and receiving alerts on simple metrics, the watching is now a considerably more complex task.

Humans make terrible monitors. We are truly rubbish at it. We get bored, distracted, fall asleep, chat, think about dinner… and we miss stuff. Important stuff. So we need to be aware of the automated changes being made, trace the reasons for those changes, and view information in ways that direct us to the most important things first and correlate information relating to one ‘thing’ in the real world that we can make sense of. Watching log files roll by is not an option. How do you present information in the right way? Technology is edging into the science of ergonomics. How do you govern a semi-automated system? How about a system of systems? How about if the boundaries of that system span the globe, beyond the walls of your office, or beyond the safe, familiar and friendly ‘vendor of choice’ boxes in your datacentre?
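
As a small illustration of what “correlate and show the most important first” might look like, here is a sketch in Python. The event fields and service names are hypothetical; the idea is just to group raw events by the real-world ‘thing’ they relate to and surface the worst news first, rather than asking a human to watch log files roll by.

```python
# Group raw events by the real-world entity they concern, then present one
# line per entity, worst first. All field names and values are invented.
from collections import defaultdict

events = [
    {"entity": "payments-service", "severity": 3, "message": "latency above threshold"},
    {"entity": "checkout-app",     "severity": 1, "message": "debug flag enabled"},
    {"entity": "payments-service", "severity": 5, "message": "automated failover triggered"},
]

by_entity = defaultdict(list)
for event in events:
    by_entity[event["entity"]].append(event)

# One line per 'thing', ordered by the most severe event relating to it.
for entity, related in sorted(by_entity.items(),
                              key=lambda item: -max(e["severity"] for e in item[1])):
    worst = max(related, key=lambda e: e["severity"])
    print(f"{entity}: {len(related)} related events, worst: {worst['message']}")
```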

Did you think I’d have all the answers? Maybe I do, but I’m keeping them to myself for now! Or maybe I don’t… but one thing’s for sure – governance is still a human concept, in a world of high technology. When the machines become self-aware, we had better be sure we have that automated fallback policy backed up… we’d never need it, they said… it’s all just dumb metal and silicon, they said…

Social Business

Two words that some – the more traditional amongst us – say should never be seen together. Fact is, business is a social activity. In a business transaction, there are always at least two parties, and consequently most of the time at least two people. Can a person honestly say that they never collaborate? Collaboration is key to achieving our objectives, as no man is an island. We have hierarchies, rivalries, friendships, peers, contacts, business card holders, phone books, Twitter followers/followees, emails overflowing out of the gaps in our keyboards, phones glued to our ears, networking events, business lunches, dinners, kick-offs, wrap parties and the golf links!! How can anyone say business is not social?

Interestingly, even as business becomes more and more about technology, the social aspect is a very manual, very lo-fi activity. Some would argue that this has to be the case; but look at the personal world we operate in outside of work (you do have one, don’t you? Or is the fiery red, incessantly blinking BlackBerry message alert just too much to bear?!). Facebook, all manner of apps, Flickr, forums, blogs, tags, shares and user-generated content all point to a movement towards using technology to change the way we interact with each other. If it was technology for technology’s sake, then how would one explain the proliferation of consumer IT? ‘Normal’ people now care about IT! Why? It has to be more than the glitzy accessory packaging. People get something from it. Bring-your-own-device movements are an acknowledgement that people want to work in more advanced and friendly ways.

That said, we collaborate and interact rather differently at work than in our personal lives, but there is a great deal of overlap. When you have a distributed business (just as you probably have distributed friends) you need to communicate effectively. Is there a reason why we no longer see rafts of circular forwarded emails in our inboxes from our mates? Well yes, because they don’t get read, and people hate them. So why do it in business? Why do people like using chat programs instead of email for some things? I even spoke to someone just yesterday who described IBM Sametime as “utterly essential”! We need both structured and more free-form collaboration, as creative and innovative collaboration cannot be constrained. Structured collaboration (you could describe this as business processes or workflows) is equally important. This way you get rid of the ‘owner’ mentality, its necessary human single point of failure, and the erratic and unpredictable results this brings.

We have the technology to subscribe to, contribute to, interact with and influence the topics that are of interest to us, on our terms. We should engender a culture of collaboration in our workforces – the technology alone is not enough. It is often said that if you keep doing the same things, the same way you always did, then you will get what you always got. I think it is far more than that: you will struggle more and more to get even the same results. Go forth and collaborate!!

Integration Innovation: the future!

OK, so leading on from yesterday, how should we integrate in future? Well, hopefully from the history lesson you can see that problems breed innovation, so to predict the future we need to look at the problems we experience today. SOA is great, but the time-to-value (TTV) can be long, and the implementation can be risk-laden. The impetus needs to come from the top: if the C-level isn’t bought in, it won’t fly. SOA only works when it is architected as part of an EA strategy, or something similar. You really need to understand the business, the data, the current technology and applications portfolio, and where the business wants to go in future in order to make it work. The concept of SOA means that you invest effort up front to be agile later on. Sometimes this (or rather, the investment) just does not ring true with customers, which is fine; there will always be customers who do not want the Rolls-Royce solution when all they really need is a scooter. SOA is not a solution for a company that just wants application A to talk to application B, and sometimes you can get a huge amount of value from a relatively small but meaningful single integration. More on that in a later blog entry!

SOA, whilst a great innovation, still requires binding between consumer and provider (what used to be called client and server – the reason we changed this is that client and server had become mere roles, with a given system sometimes a client and sometimes a server). We still have a relatively inflexible structure that, whilst more agile than before, and certainly more agile than a point-to-point communication model, still forces consumers to ask providers for information, and in a format the provider can cope with. There might be a hundred consumers all competing for the attention of the provider.

Wouldn’t it be nice if providers could just tell the consumers when things happen that they might be interested in? As architectures become more node-intensive (such as mobile networks), it is far more sensible to loosely bind consumers and providers. Rather than having tight binding between application A and application B, or device A and server B (and the maintenance that goes with managing that relationship), invert the control from consumer to provider. Providers broadcast updates, and candidate consumers ‘listen in’, using the contextual information contained within each update to determine its relevance. Providers need not just be software applications, and the dynamic nature of consumption may not require every update to be captured and handled, perhaps only trends and patterns in update aggregates (see IBM InfoSphere Streams for some really cool stuff here). This model suits situations where there are massive numbers of consumers or providers (or indeed both), or where individuals may come into and disappear out of existence dynamically.
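
Here is a minimal sketch in Python of the inverted model described above: providers broadcast updates carrying context, and consumers decide what is relevant to them. The toy in-process Broker, the ‘route’ field and the subscribers are all invented for illustration; this is the shape of the idea, not any particular product’s API.

```python
# A toy publish/subscribe broker: providers broadcast, consumers listen in and
# filter on context. Everything here is hypothetical and purely illustrative.

class Broker:
    def __init__(self):
        self.consumers = []          # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self.consumers.append((predicate, callback))

    def publish(self, update):
        # The provider does not know or care who is listening.
        for predicate, callback in self.consumers:
            if predicate(update):    # consumer uses context to judge relevance
                callback(update)

broker = Broker()

# A consumer that only cares about one route; consumers can come and go freely.
broker.subscribe(
    predicate=lambda u: u.get("route") == "42",
    callback=lambda u: print(f"Route 42 update: {u['status']}"),
)

# Providers just broadcast; adding or removing consumers needs no change here.
broker.publish({"route": "42", "status": "delayed 5 minutes"})
broker.publish({"route": "7",  "status": "on time"})   # silently ignored
```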

The integration model closest to this currently is eventing, although as yet we do not have a well-adopted common and open schema for event format. Common Event Infrastructure (CEI) is moving towards this, and works well for IBM software applications (WebSphere, Tivoli) and some devices, but this is an IBM proprietary format.

Future applications of this concept could include nanotech, where myriad nano-devices could behave as providers or consumers of volumes of information in small payloads; or perhaps WANs with in-field devices similarly acting as providers and consumers as appropriate. In fact this is not the future; it is happening now. A major European city mass transit system already uses this model for tracking and dashboarding dynamic information about its rolling stock in transit; and there is some very interesting work going on at IBM Hursley with the WebSphere micro technologies.

More soon 🙂

Integration Innovation: a history

Integration between disparate, heterogeneous systems is pretty complicated. Like anything else in IT, maturity in this area has progressed through phases. First we had disconnected integration, where one system would leave a message and another would pick it up. This was non-transactional and batchy, and most importantly data format became a concern, although we didn’t have to worry about things like transmission protocols, because messages were not transmitted directly from system to system; they went via an intermediate location such as an FTP site or a filesystem. Ultimately, it was the batchiness of this integration technique that created the impetus to innovate. The notion that we could transmit data from system to system in ‘real time’ (or close to it, and certainly better than batch) was hugely attractive – but how to do this when all systems are different, using different platforms, programming languages, data constructs and so on?
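
To give a flavour of that disconnected style, here is a minimal sketch in Python of system A leaving a message in an agreed intermediate location and system B sweeping it up later on its own schedule. The directory, file-naming convention and order payload are all hypothetical.

```python
# A toy 'file drop' integration: non-transactional, batchy, and the shared
# data format is the main concern. All names and paths are invented.
import json
import pathlib
import time

DROP_DIR = pathlib.Path("integration-drop")
DROP_DIR.mkdir(exist_ok=True)

# System A: write an order out as a file and walk away (no transaction, no ack).
order = {"order_id": 1001, "item": "widget", "quantity": 3}
(DROP_DIR / f"order-{int(time.time())}.json").write_text(json.dumps(order))

# System B, running later on its own schedule (the 'batchy' part): sweep the directory.
for message_file in sorted(DROP_DIR.glob("order-*.json")):
    payload = json.loads(message_file.read_text())   # agreed data format is the contract
    print(f"Processing order {payload['order_id']}")
    message_file.unlink()                             # consume the message
```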

Web services rapidly emerged to fill this gap, and I remember writing an academic paper on their advent in the mid-90s, not really that long after the web started to gain public popularity. Web services allowed us to communicate using an emerging open standard, with a platform-independent (although necessarily basic) data model, over HTTP. The web service as a concept is often confused with the benefit it could afford – most people were talking web services when what they really meant was something akin to a forerunner of Service Oriented Architecture (SOA), but I will get to that later. So all our systems communicate with each other using synchronous web service calls – great… actually, not that great! The more systems we have, the more un-reusable work we have to conduct to get them to talk; but the more success we create, the more value we realise and the more systems we need to integrate. This cycle meant that we rapidly ended up with a vast spaghetti communication model, and a bigger homegrown integration codebase (with all the operational and maintenance work that goes with that) than anything else. So we go back to basics, with an intermediate location or system to reduce the number of integrations we need to write.
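
A quick back-of-the-envelope illustration of why the spaghetti grows so fast (the numbers below are simply worked arithmetic, not from any real estate): with point-to-point integration, every pair of systems that must talk needs its own largely un-reusable piece of work, so the effort grows roughly with n(n-1)/2; with an intermediary in the middle it grows roughly with n.

```python
# Compare point-to-point links with hub connections as the estate grows.
for n in (3, 5, 10, 20):
    point_to_point = n * (n - 1) // 2   # every pair needs its own integration
    via_hub = n                         # each system connects to the hub once
    print(f"{n} systems: {point_to_point} point-to-point links vs {via_hub} hub connections")
```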

Enterprise Application Integration (EAI) and later the Enterprise Service Bus (ESB) patterns (they are very similar) do just that – allowing us to reduce the number of connections we need to write, because each system need only communicate with the ESB rather than with every other system it might need to talk to. The ESB can intelligently route requests, mutate data as it travels, and enforce compliance, quality, security and other non-functional requirements; and the ESB, at the centre of the overall enterprise architecture, is ideally positioned to keep an eye on everything for monitoring, metrics and so on. We aren’t even mandated to use a given protocol or data format, as the modern ESB is ‘aware’ of lots of them. Sounds great so far? Well, again we are the victims of our own success! With the promise of integration allowing seamless data flow within a given architecture, what about when things change? One system gets replaced by another and we then need to undertake a lot of migration work both on and off the ESB. Also, what about partnering? How can other businesses communicate with us, and us with them, more effectively and meaningfully?
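
Here is a minimal sketch in Python of the hub-and-spoke idea: each system talks only to the bus, which routes messages and reshapes data on the way through. Real ESBs (with their transport, security and monitoring features) are far richer than this; the ServiceBus class, message types and field names below are all invented for illustration.

```python
# A toy service bus: one place to route and transform, so producers and
# consumers never need to know about each other directly.

class ServiceBus:
    def __init__(self):
        self.routes = {}   # message type -> (transform, destination handler)

    def register(self, message_type, transform, destination):
        self.routes[message_type] = (transform, destination)

    def send(self, message):
        transform, destination = self.routes[message["type"]]
        destination(transform(message))        # route + mutate data in one place

# A hypothetical downstream system expecting its own field names.
def billing_system(msg):
    print(f"Billing received invoice request for customer {msg['customer_ref']}")

bus = ServiceBus()
bus.register(
    "order.created",
    transform=lambda m: {"customer_ref": m["customerId"]},  # data mutation on the bus
    destination=billing_system,
)

# The ordering system only knows about the bus, not about billing.
bus.send({"type": "order.created", "customerId": "C-123"})
```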

SOA, again, had been around for a while longer, but the growing business imperative of finding a solution to the above issues fuelled its growth in uptake. The key notion of SOA is that you create a brittle architecture by getting vendor A’s system talking directly to vendor B’s system, whether using an ESB or not; so instead we integrate the business services afforded by IT applications and packages, not the applications and packages themselves. Thus, we integrate CRM and ERP, with vanilla, easy-to-understand interfaces, rather than SuperWhizz CRM with HugeComplicated ERP using complicated and proprietary APIs. This way we can break free of any one-to-one mapping between application and business service – some business services may be realised by a combination of several systems and databases. SOA acts as a proxy to the IT estate, making it easier to understand and therefore easier to integrate, and aligning business and IT more effectively.
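
To illustrate the point, here is a minimal sketch in Python of a plain ‘customer’ business service fronting the estate. The SuperWhizz and HugeComplicated names come from the paragraph above, but their awkward APIs and the CustomerService facade are entirely invented; the point is that consumers bind to the vanilla service, not to the packages that happen to realise it.

```python
# A toy business-service facade: a vanilla interface hiding which packages and
# databases realise the service. Backend APIs below are purely hypothetical.

class SuperWhizzCRM:                       # hypothetical proprietary package
    def fetch_party_record_v3(self, party_key):
        return {"PARTY_NM": "Acme Ltd"}

class HugeComplicatedERP:                  # hypothetical proprietary package
    def qry_acct_bal(self, acct_no):
        return 1042.50

class CustomerService:
    """The business service consumers integrate with; the estate behind it can change."""
    def __init__(self, crm, erp):
        self.crm, self.erp = crm, erp

    def get_customer(self, customer_id):
        record = self.crm.fetch_party_record_v3(customer_id)
        balance = self.erp.qry_acct_bal(customer_id)
        return {"id": customer_id, "name": record["PARTY_NM"], "balance": balance}

service = CustomerService(SuperWhizzCRM(), HugeComplicatedERP())
print(service.get_customer("C-123"))
```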

So what does the future hold? Watch this space….