Tuesday 5 December 2017

Some Trust in Models! On Simulations, HyperNormalisation and Distorted Reality in Software Teams (Part 1)


Still from Adam Curtis' 2016 documentary "HyperNormalisation"


Software is all about abstraction. We take an existing or desired process, model it and codify that model into a set of instructions and data that computers can read and follow.

We come up with different processes, methodologies, notional lifecycles, workflows, test strategies, measurements and quality controls to steer our immensely difficult projects to a satisfactory conclusion. Getting teams of people with different roles and skills to work together, managing the creation and deployment of code with its own complexities and dependencies, and implementing what (we believe) users want is incredibly hard.

The processes we use to create software are also abstractions. The problems we face are these: how much do the abstract requirements, processes and methodologies we implement really reflect a desired real-world outcome? In the midst of a software project, how do we maintain the link between abstraction and reality? What happens when a software team loses the ability to tell the difference between simply fulfilling the process and delivering a real-world product people actually want? More dauntingly, at what point will the team stop caring?


Everything Was Forever Until It Was No More


A stamp featuring Pimenov's "Wedding on Tomorrow Street", 1973

Anthropology and sociology provide some interesting allegories that, within limits, help us to look at the questions above. The Russian academic Alexei Yurchak, in his 2006 book "Everything Was Forever, Until It Was No More: The Last Soviet Generation", discussed the paradoxical nature of life in the Soviet Union of the 1970s and 80s. People were realising the disconnect between the ideological and propagandistic announcements of a Soviet state that saw itself as successful and immutable ("the end of history" as Marx envisaged) and the increasingly obvious facts that its economy was stagnant, its institutions were failing, events such as the war in Afghanistan were hurting the collective psyche and quality of life was getting worse by the day. However, in the absence of an alternative, people who understood that something wasn't quite right lived with the dissonance and carried on with the pretense that nothing was wrong, deluding themselves into a self-fulfilling prophecy. All the while the national discourse piled upon itself and became ever more repetitive, unwaveringly copying and reinforcing earlier dogma. When the Soviet Union did collapse, so quickly and dramatically, its disintegration was shocking to most Soviet citizens, who had seen it as eternal. Yurchak coined the term "hypernormalisation" for this condition.

The disconnect between the official narrative and the reality of life in the final decades of the Soviet Union gave rise to artistic protests of a satirical nature, with films such as Andrei Tarkovsky's "Stalker" describing a world outwardly similar to ours, one that could fulfil all our dreams, yet at a deeper level profoundly different and nightmarish.

Yurchak's book had a profound influence on the English BBC journalist and filmmaker Adam Curtis, whose impressive 2016 documentary "HyperNormalisation" expressed the belief that this condition had extended to the whole world. In Curtis' view, from the 1970s onwards our leaders and media figures grew tired of struggling to articulate and tackle the complex problems of the real world, resigned themselves to managing the current order and predicting risks, and created a "fake world" which trivialised long-existent and complex social, economic and political issues into half-baked narratives of "Good vs Evil" with misplaced or contrived enemies. Instead of challenging themselves and the narratives told to them, people turned to virtual worlds in cyberspace that were designed to be free but had powerful hierarchies of their own - harvesting the information and activities of billions of people and filtering the media people saw to reinforce, instead of challenge, their opinions.

Consequently, people either sensed the dissonance between the "fake" perception-managed world and the real one yet accepted the narrative as fact, opting into solipsism and self-focused activities, or used the internet to create huge protests and revolutions. Those who retreated became susceptible to agents skilled in maintaining confusion and instability - the film uses the examples of then-candidate Donald Trump and Vladimir Putin's strategist Vladislav Surkov - who turned politics and news into bewildering theatre, such that facts and truth were no longer politically relevant and rumour and chaos reigned. This left people much more accepting of the status quo (as in Putin's Russia) or open to demagoguery (as in the 2016 US elections).

For those who revolted, in the absence of a compelling alternative direction for society these revolutions failed tragically and spectacularly: the protests in Egypt were hijacked by the Muslim Brotherhood, which was eventually deposed by the military, and the Syrian civil war led to the rise of ISIS.


The Road to Disneyland: Simulacra and Simulation


Pluto, Dale and Donald Duck on a firetruck, Disneyland Anaheim, 2010

Both of the narratives above borrow from postmodernism, particularly semiotics (the study of how meaning is communicated and assigned through symbols and signs). The French sociologist Jean Baudrillard, in his controversial 1981 treatise "Simulacra and Simulation", described the process by which human culture progressively replaces our direct experience of reality with a series of signs and symbols (which he refers to as "simulations") - copies and representations of the real world with a seductive quality.

Over time the simulation is tweaked and copied until it deviates from the original, becoming (in his words) an "evil appearance—it is of the order of maleficence". This continues until any relationship with the original reality is semantic and arbitrary at best.

The end point of this, which he calls a "simulacrum", is a simulation or copy that has been abstracted to the point where it has no relationship to an original at all - only to other signs, which may themselves be simulacra. Because the simulacrum is not grounded in any physical object, we neither acknowledge nor care whether our experience of it is based on reality - yet it becomes a "truth" in its own right. This artificial existence is known as "hyperreality".

The most commonly cited example, referred to by both Baudrillard and the Italian philosopher Umberto Eco, is Disneyland. The facades of Main Street, with their allusion to an idealised past, appeal to our imaginations and desires. The Big Thunder Mountain ride is an exciting allusion to old mining rail tracks in a cowboy setting. Nevertheless, Main Street never existed as a real street, and the mining railways of bygone times were nothing remotely like Big Thunder Mountain. Of course, Disneyland is so seductive that none of this matters. We engage with the fantasy world as if it were real. This is the definition of the hyperreal.

Other stated examples of simulacra include historical fallacies that have been widely promoted as fact, artificial substitutes for human interaction such as sex dolls and AI chatbots, films with CGI characters and backgrounds, and "structured" reality TV shows.

A Postmodern Perspective on Software Delivery


I came across the work of Baudrillard and Adam Curtis' "HyperNormalisation" at about the same time in 2016, after about nine years of working as a tester. Whilst thoughtlessly applying ideas from sociology and anthropology to software teams is a flawed and dangerous activity, I believe they provide a lens through which to see how our inability to deal with the inherent contradictions behind key concepts in software engineering can lead software teams into flawed and misused practices - and why, even when those practices are suspected to be flawed and misused, teams fall into dissonance and fail to act right up to product catastrophe.

For this I offer perspectives on four concepts critical to software engineering and testing - quality, measurement, requirements and methodology/process.


Quality - Relative at Best, Simulacrum at Worst


The usual aim of a software team is to deliver a "quality product", but quality is a vague and shifting target. Gerald Weinberg, sometimes called the "Godfather of Agile", wrote in his 2012 blog article "Agile and the Definition of Quality" on the "Relativity of Quality" - "what is adequate quality to one person may be inadequate quality to another." He states the following -

"If you examine various definitions of quality, you will always find this relativity. You may have to examine with care, though, for the relativity is often hidden, or at best, implicit.

Take for example Crosby's definition:

"Quality is meeting requirements."

Unless your requirements come directly from heaven (as some developers seem to think), a more precise statement would be:

"Quality is meeting some person's requirements."

For each different person, the same product will generally have different "quality"."

He lists various statements from his article defining quality from the point of view of specific project stakeholders -


"

a. "Zero defects is high quality."

1. to a user such as a surgeon whose work would be disturbed by those defects

2. to a manager who would be criticized for those defects


b. "Lots of features is high quality."

1. to users whose work can use those features–if they know about them

2. to marketers who believe that features sell products


c. "Elegant coding is high quality."

1. to developers who place a high value on the opinions of their peers

2. to professors of computer science who enjoy elegance

"

etc...


His ultimate definition of quality - that it is "value to some person" - has consequences for agile teams. He states that the definition of quality is "political and emotional", and thus leads to decisions on whose opinions count most. These decisions can be less than rational, and they are often hidden from public view.

It is my assertion that where not enough thought has been given to grappling with the contradictions above, a team's conception of "Quality" bears little relation to the reality of the product (to the extent that this reality can be defined or measured) or to the aims of the project. Quality as a symbol or concept becomes nothing more than the biased opinion or dogma of some powerful manager or stakeholder, or a vague guess at the desires of some abstract "target market". However, in the perceived absence of anything better, the team acts as if it were the truth. It becomes a simulacrum.

The effect of this is that we have started to give up the belief that we can make a "quality" product. James Bach, in his 2009 blog article "Quality is Dead #1", paints a despairing picture -

“A pleasing level of quality for end users has become too hard to achieve while demand for it has simultaneously evaporated and penalties for not achieving are weak…. When I say quality is dead, I don't mean that it’s dying, or that it’s under threat. What I mean is that we have collectively - and rationally - ceased to expect that software normally works well, even under normal conditions. Furthermore there is little any one user can do about it.”

Bach asserts that the result of this disillusion with the possibility of quality is that management have given up and moved towards a lowest-common-denominator approach - dispensing with good test teams and moving to cheaper and less capable offshore teams.

“Top management can’t know what they are giving up or what they are getting. They simply want to spend less on testing. When testing becomes just a symbolic ritual, any method of testing will work, as long as it looks impressive to ignorant people and doesn’t cost too much.”

This "going through the motions" while publicly acting as if quality is being improved is, as I see it, a type of mini-hypernormalisation.

The Question of Measurement


Measuring an attribute that does not lend itself to a precise and agreed definition, and that arises from human feelings and politics, is evidently problematic and will lead to inconsistent and tenuous results. Rich Rogers, in his 2017 book "Changing Times: Quality for Humans in a Digital Age", writes -

"If stories and feelings tell us more about human responses than cold facts, dates and numbers, this might provide a clue as to why software development teams, and the organisations they work with, sometimes struggle with the question of quality."

"In this field, refuge is often sought in numbers.... By setting numeric quality targets - sometimes called exit criteria - a desired state can be agreed upon: a point at which the product is deemed good enough for whatever follows, including whether it is ready to be released to customers. The numbers act as a comforting security blanket, creating a sense of control. Even if the picture they paint is troubling, at least the provide a means of seeing that picture."

However, as Rich Rogers goes on to explain, this is an abstraction that, while comforting, is limited and prone to mislead. Crude metrics such as counts of passed test cases and defects assume an equivalence between test cases that is unlikely to exist. They also hide the investigation and testing that happens outside those test cases, along with defects that were resolved without ever being recorded.

"Quality is not measurable in the way cost or time might be measured... There is no such unit of measurement for quality."

He makes the case that metrics still have a role to play in discussions about patterns and potential issues. Reports are only useful in triggering those discussions - about what the metrics do and do not tell us.
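
To make the point about crude counts concrete, here is a minimal sketch in C# (the test names, counts and "critical area" flag are invented for illustration). A raw pass rate treats every test case as equivalent, so a healthy-looking number can sit on top of the one failure that matters.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class TestResult
    {
        public string Name;
        public bool Passed;
        public bool CriticalArea; // severity information that a raw count ignores
    }

    class PassRateDemo
    {
        static void Main()
        {
            var results = new List<TestResult>();

            // 99 passing cosmetic checks...
            for (int i = 1; i <= 99; i++)
                results.Add(new TestResult { Name = $"Cosmetic check {i}", Passed = true, CriticalArea = false });

            // ...and one failing test in a critical area.
            results.Add(new TestResult { Name = "Take payment", Passed = false, CriticalArea = true });

            // The "comforting security blanket": a single healthy-looking number.
            double passRate = 100.0 * results.Count(r => r.Passed) / results.Count;
            Console.WriteLine($"Pass rate: {passRate:F0}%"); // 99%

            // The question the raw number never answers.
            bool criticalFailure = results.Any(r => r.CriticalArea && !r.Passed);
            Console.WriteLine($"Critical failure hidden behind it: {criticalFailure}"); // True
        }
    }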


The Illusion of Requirements


What about requirements? If one takes a list of use cases, desired functionalities and acceptance criteria - and the product contains all of these, working as agreed - does that mean we have a quality product?

I believe that this is also tenuous at best and a comforting fantasy at worst. Requirements follow the same rules as "quality" defined by Weinberg. They depend on the judgement, political power and knowledge of the person who defined them - as filtered through project stakeholders, business analysts and the printed page.

The person who conceived the requirements, we have to assume, has a correct knowledge of the problem to be solved. Anyone with experience of software development will know of circumstances where that is a bad assumption. Requirements may arise from a high-level abstraction of an existing business process, ignorant of the day-to-day challenges and tacit knowledge of those implementing that process. The person who stated the requirements may be a high-level manager without any experience of ever implementing the process, a slave to the reports of subordinates.

The illusion doesn't end there. Various test case management tools allow use cases and acceptance criteria to be linked to one or more test cases, generating requirements traceability. This is flawed for the same reason that test case metrics are flawed: it presumes that the quality of a requirement can be neatly expressed by X test cases passing in an associated matrix, or by an hour's session-based exploratory testing revealing no defects. Any tester with experience will tell you how wrong this is.
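
The rule a traceability matrix encodes is brutally simple, as this minimal C# sketch shows (the requirement names and results are invented): a requirement counts as "verified" the moment its linked test cases pass, and nothing in the data says whether those tests were deep, relevant or sufficient.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class TraceabilityDemo
    {
        static void Main()
        {
            // Requirement -> results of its linked test cases (true = passed).
            var matrix = new Dictionary<string, List<bool>>
            {
                ["REQ-101 User can log in"] = new List<bool> { true, true, true },
                ["REQ-102 Report totals are accurate"] = new List<bool> { true } // one shallow check
            };

            foreach (var entry in matrix)
            {
                // The naive rule: all linked tests passed => requirement "verified".
                // Nothing here captures the depth or relevance of those tests.
                string status = entry.Value.All(passed => passed) ? "verified" : "NOT verified";
                Console.WriteLine($"{entry.Key}: {status} ({entry.Value.Count} linked test(s))");
            }
        }
    }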

Methodologies and Primacy of the Process


In response to the above, many different approaches to software development projects have been created, along with practices within them. Within agile alone we have XP, Kanban, TDD, BDD, Continuous Integration, Scrum, SAFe, DevOps... Even within testing we have competing schools and methodologies. These are all simulations of how teams work in practice - very rarely do teams follow a prescribed methodology to the letter. How one team defines "agile" can have only a tenuous relationship to how another defines it, even within the same company.

The danger is that the process is deemed to matter at the expense of the outcome, or is implemented to benefit its adherents without regard to the project. For new adherents, as Rich Rogers points out -

"The techniques and tools used in carrying out the work can seem exciting, and there is no shortage of new ideas and skills to learn. The desire to be at the forefront of change, or at the very least to demonstrate awareness and familiarity with current methods, can be a powerful motivator in this field. Adoption of new techniques, training in how to use new tools and acquisition of knowledge and skills related to new ideas, or possibly a desire to enhance a resume."

One area where the hype and the impetus to learn are already having an effect is test automation. The drive to automate more (if not all) tests, and its effects on tester recruitment, causes testers to devote project and personal time to learning automation tools and programming - and to implement them without experience or real forethought. I know from my own experience the waste that results when an automation strategy is implemented poorly, based surreptitiously on a desire to learn a new skill in a commercial context.

Another danger is that processes are defined at the company level and not the team level. Because they were believed to work before, we enforce processes and methodologies on other projects in an effort to "standardise". This may make sense at a management, resource allocation and control level, however projects with ill-considered, enforced processes risk great failure.


Leaky Abstractions in a Complex World


If we understand the flaws in defining quality, measurement and requirements, and our methodologies are idealisations, why do so many teams still persist in making critical project decisions based on their flawed understanding of them? Is it purely ignorance or something else?

Rich Rogers makes the point that metrics, however flawed, act as a comforting security blanket, providing some basis for decision making. We may not be measuring the right thing but we are measuring SOMETHING that may or may not be close to quality.

In the same way, requirements may be incomplete, flawed or at the extreme end utterly irrelevant to the problem we wish to solve, however they do exist. One can use them for contracts, resourcing, planning, development, testing and delivery. In agile methodologies we can continually add changes until (we hope) the end product approaches something the user or stakeholders might want.

In any case these, along with working definitions of quality, are all models of the "real" state of a project or product. For better or for worse they are vital in reducing a development project from something almost ineffable to something that can be planned or achieved. I treat them as "simulations" as defined by Baudrillard. Because of this, they are greatly seductive.

What matters is their relationship to the real state of the product and the development process.


Hyperreality and the Failing Team


My conjecture is that in projects that are dysfunctional or fail to provide what we call a quality outcome, these simulations are so far removed as to be devoid of reality. In the same way that Disneyland is a simulacrum, so are these.

However, simulacra are greatly seductive simply because they are easier than embracing the difficult-to-manage real world they pretend to describe. Hyperreality is not real, but its sway on a team certainly is. Also, changing approaches and mindsets within teams and at the company level, especially without either the privilege of management support or a clear narrative of a better solution and how to get there, ranges from difficult to impossible. The closer to the project deadline, after so much has been invested and with the probable pain of a late or failed delivery looming, the less willing the team will be to experiment and the more likely it is to act as if nothing is wrong. Status reports show green, problems are ignored, process and the simulacra are gospel - and then the Soviet Union collapses.

Nevertheless, as in the final decades of the Soviet Union, it is probable that a few team members at some level know that something is wrong with the system, even if they cannot articulate the problem and see no alternative. They may follow "best practice" to the letter, see great test coverage as defined by the flawed metrics above, defects resolved and requirements ticked off, yet their stakeholders still complain, or their clients find defects that are tangential to or make no sense in terms of the requirements they worked with. Everything is said to be running smoothly, but everyone and everything is stressed. This is what I regard as hypernormalisation - hyperreality with a subtle chink in the armour.

In companies that are greatly hierarchical, with established and immutable processes, and in teams with egos and dominant personalities reliant on the status quo, it is likely that the unnerved team members above, especially if not at a senior level, will keep quiet. They may decide that "nobody got fired for just doing their job", or patiently wait for the end of the project. They may ask to be moved to another project or, if the stress becomes an issue, take sick leave. They may go through the motions of work without committing fully to it. These are protests of a sort, but all they do is accelerate the eventual collapse and slump in quality.

The unfortunate truth is that without some shock that forces the team to reflect on its assumptions and processes, most teams trapped in a seductive but vicious hyperreality are unlikely to be self-aware enough to change until the project suffers dramatically or collapses entirely.


Epilogue


In this text I have covered the concepts of hypernormalisation, simulacra and simulation, and hyperreality, and suggested them as analogies through which to look at four key concepts in software engineering - quality, measurement, requirements and methodology/process - and their impact on software projects. I have asserted that our failure to understand that these are ultimately seductive simulations, with varying degrees of relationship to the real world, and the difficulty teams have in reconciling them, put projects at great risk of deluding themselves into poor products and failure.

In Part 2, to be completed shortly, I look at applying these concepts to the IT industry and our tech culture as a whole, and look into various ways suggested to prevent or rescue teams from seductive self delusion. In the meantime, I would be grateful for your comments and considerations.


Friday 20 October 2017

I'm Paul and I'm a Failure (and it's ok!)

Every day I make one silly mistake. Be that misreading some acceptance criteria or use case, setting something to the wrong value in JIRA so that it doesn't appear in a bug report, forgetting to send a document to someone, missing my train stop on the way home, accidentally pulling out my wife's computer's power cable when disconnecting my own laptop, the large number of times I wrote a tweet that in retrospect I shouldn't have...

My life in IT has had a few pretty startling incidents of backing a losing horse. There was the time when, as test lead, I fully and excitedly backed my superior's decision to solve our automation problems by buying an HP QTP licence for tens of thousands of dollars (buying into the logic that spending lots of money on a tool shows undying commitment), then went gung-ho at creating automated tests in it (my team of three being the only team to do so). When my superior left and the tech management sensibly decided that we would save our money and move to Selenium, my team lost its entire automation suite overnight. I couldn't argue - it was totally the right decision.

Or the time when, due to my lack of development experience, of knowledge of the relative benefits of breadth-first search, and of willingness to ask for help, the recursive database abstraction layer I wrote took days instead of tens of minutes to run (or, more often, crashed with an out-of-memory error - no mean feat in a garbage-collected language like C#!) and caused severe delays to a product release. The fallout from that shattered my belief in my own programming ability for years, but I got over it slowly.
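
As an aside for the curious, here is a minimal sketch of the trap I fell into, with an invented Node type (nothing like the real system): a naive recursive walk re-visits shared nodes over and over, while a breadth-first traversal with a visited set touches each node exactly once. On a densely connected schema, that difference is days versus minutes.

    using System;
    using System.Collections.Generic;

    class Node
    {
        public string Id;
        public List<Node> Children = new List<Node>();
    }

    class TraversalDemo
    {
        // The naive version: recurses into shared children repeatedly, so a densely
        // connected graph explodes combinatorially (days of runtime, or OOM).
        static int VisitRecursive(Node node)
        {
            int count = 1;
            foreach (var child in node.Children)
                count += VisitRecursive(child);
            return count;
        }

        // Breadth-first with a visited set: each node is processed exactly once.
        static int VisitBreadthFirst(Node root)
        {
            var visited = new HashSet<Node> { root };
            var queue = new Queue<Node>();
            queue.Enqueue(root);
            int count = 0;
            while (queue.Count > 0)
            {
                var node = queue.Dequeue();
                count++;
                foreach (var child in node.Children)
                    if (visited.Add(child)) // false if already seen
                        queue.Enqueue(child);
            }
            return count;
        }

        static void Main()
        {
            // A small diamond-shaped graph: a -> b, a -> c, b -> d, c -> d.
            var d = new Node { Id = "d" };
            var b = new Node { Id = "b", Children = { d } };
            var c = new Node { Id = "c", Children = { d } };
            var a = new Node { Id = "a", Children = { b, c } };

            Console.WriteLine(VisitRecursive(a));    // 5 visits for 4 nodes - and it only gets worse
            Console.WriteLine(VisitBreadthFirst(a)); // 4
        }
    }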

Or the time when I managed to break a database migration system I was coding a multilingual interface for, because one of the lines I translated into French (clearly in an unthinking fashion) was actually a SQL statement.

Why do I mention these potentially career-limiting paragraphs in my blog? Because whilst I suffer from self-belief issues as much as the next person, I don't feel ashamed of the mistakes I made (and occasionally still make), have got over the need to be a perfectionist and have slowly come to recognise mistakes as opportunities for learning and self-reflection. Only the most privileged and least risk-taking of us sail through life without failing at something.

Yet when I read the blogs of some of the other testers and thought leaders in this field (not that I would ever call myself a thought leader in anything), I am unsettled by the lack of humility, and the lack of mention of the hard lessons behind the advice they give. Of course we have careers to protect, conferences to speak at and consulting gigs to apply for; however, none of us were great IT consultants, devs and testers out of the womb. Not One Of Us. Rarely is a career a perfect and graceful trajectory from school and university via junior roles to senior ones. We all have mistakes we have made (some of us may well have been fired from jobs and had to bounce back) and opportunities to learn, so why don't we write about them, or talk about them at conferences, or mention them on Twitter?

Why aren't we more tolerant of the mistakes our colleagues, managers or staff make? Do we think we are above them? I once had a boss whose intolerance of imperfect work (and, for that matter, whose temper) was legendary. I once heard him shout "I don't pay you to make mistakes!" Was he devoid of errors himself? Not at all.

Was his team a perfectly oiled machine as a result? Of course not, and most of them hated him. High performance doesn't come about through bullying and threatening people. He achieved nothing but, probably, high blood pressure. I have seen attitudes in other teams I have worked with that, whilst not quite as aggressive or extreme, were no less haughty.

Let's calm down and appreciate our fallibility. Admit our errors. State what we learned in the context of how we learned it. Let's be more tolerant of the honest failures of those we work with. That is the only way we will receive forgiveness for our own faults in return.

Sunday 8 October 2017

On Testing Technocrats



The agile manifesto, written in 2001, states as one of its values -

 "Individuals and interactions over processes and tools"

It is hard to imagine how, in an agile team where stakeholders, product managers, developers, testers, BAs and other groups work so closely together and much knowledge is tacit and undocumented, the above value could possibly be ignored.

Software is ultimately used by people to implement solutions to problems people suffer from. It is created by people. The use cases and requirements that developers implement and testers test against are created by someone to reflect someone's (or some people's) wishes and needs. Testers test to find issues that we hope will never be discovered by people whose problems the software under test is built to solve.

...Which makes me annoyed about the idea that development/test teams can automate everything. The problems that bad software can cause for people are many and varied, and the number and kinds of ways a non-trivial application may fail are far greater, and often more subjective, than we can possibly plan for in advance. Requirements and specifications have subtle holes and areas of tacit knowledge that risk creating a product that does a wonderful job of something people don't actually want.

I am a big fan of test automation in its rightful place. As a regression tool it frees the tester from hours of tedious and often unfruitful checks to concentrate on those areas that require more exploration, analysis and thinking. It provides stubs and mocks so that the developer and tester can continue their work without the fully integrated system being ready or available. It creates reams of test data with whatever attributes are necessary so that we can continue our work without laborious setup. It helps us perform checks with multitudes of simulated users to discern areas of poor performance. For all of the above, however, it doesn't replace the thinking mind of a competent test analyst - the kind who is adept at putting him or herself in the place of the user, with all the complexities this involves.
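
To illustrate the stubs point, here is a minimal sketch in C# (the payment gateway interface and its business rule are invented): the real gateway may not yet exist or be available in the test environment, but a hand-rolled stub behind an interface lets development and testing carry on.

    using System;

    // The seam: the code under test depends on this interface, not on a concrete gateway.
    interface IPaymentGateway
    {
        bool Charge(string accountId, decimal amount);
    }

    // A hand-rolled stub standing in for the real, not-yet-available gateway.
    class StubPaymentGateway : IPaymentGateway
    {
        public bool Charge(string accountId, decimal amount)
        {
            // Simulate a simple rule so tests can exercise both outcomes.
            return amount > 0 && amount <= 1000m;
        }
    }

    class CheckoutService
    {
        private readonly IPaymentGateway _gateway;
        public CheckoutService(IPaymentGateway gateway) => _gateway = gateway;

        public string PlaceOrder(string accountId, decimal total) =>
            _gateway.Charge(accountId, total) ? "Order confirmed" : "Payment declined";
    }

    class StubDemo
    {
        static void Main()
        {
            var checkout = new CheckoutService(new StubPaymentGateway());
            Console.WriteLine(checkout.PlaceOrder("acct-1", 25.00m));   // Order confirmed
            Console.WriteLine(checkout.PlaceOrder("acct-1", 5000.00m)); // Payment declined
        }
    }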

But some of us feel that we can automate all of this away to reach 100% test coverage by automation, or that some future AI will make all non-automated test approaches redundant (I am not convinced that this will ever happen). I wonder, is it all about cost-cutting? The bottom line?

Or is there a type of person who thinks that the ambiguous and complex can be completely reduced to a set of checks and algorithms? That the human dimension, the outlook that human beings provide, can be abstracted away by technology? That what people see as quality can be reduced to a set of yes and no answers? We hit the button, stand back and, like the great tech sausage factory, pump something in; our list of deviations comes out. Rinse and repeat with a few fixes and we get quality at the end...

I call these types the testing technocrats. By reducing our work to purely algorithms and checks to be done by machines - and quashing the thinking factor - they reduce the wishes and concerns of the users of our products to exactly that.

All in the hope of getting releases out faster, saving a bit of money and not having to deal with the complex findings that a thinking human tester provides. This has been called out as wrong many times, but often to deaf ears.

Test automation is an amazing thing that lets us achieve great efficiencies. It is not a replacement for the thinking human element. As testers we should make every effort to keep arguing against those who believe it is.

10 minute blog - Our ideas don't exist in a vacuum

(The first in a series of very short, regular articles, no more than 10 minutes in conception and completion, fairly whimsical in nature and an exercise in keeping up the habit of writing regularly and getting over short bouts of writer's block.)

There is a school of thought that says ideas come out of quiet and reflection. Ever since Siddhartha Gautama sat beside a tree, meditated and became the Buddha, we have believed that we can quietly construct great ideas from first principles. Maybe that works if you are a philosophical genius; apart from that, I don't believe a word of it.

In the last few months, as some have probably noticed, I have spent a fairly large amount of time (non-work hours, if my boss is reading this 0:-) ) on Twitter, reading and tweeting in the various testing threads and chats. I really enjoy doing it. I'm not sure I add much to the debate - I am anything but an expert or guru in testing - however if I didn't do it then I would struggle to have any ideas to write about. This blog would die a quiet death. Very few people will ever come up with a truly original idea - most articles are just variations on a theme - however the best ideas you will see these days are formed in, and survive, the cauldron of debate.

How can we risk our ideas being repudiated, shot down, publicly ridiculed, called out for BS? I don't think we can avoid it. Many of my ideas have been shot down or ignored - and that's fine, because in retrospect some of them were shit anyway. However, the alternative is not saying anything, and you miss all of the shots you don't take.

Saturday 16 September 2017

On Testing this Pen

I am currently flying to Wellington, NZ to attend the WeTest 2017 conference. Last week I had two interviews for another placement with clients of my employer. It was while reflecting on this that a memory of one of my earliest interview experiences came to me.

Years ago, when I was a young, uncouth and very poor university graduate, I spent a year doing quite unglamorous jobs in Manchester, UK. The first of these was as an outbound telesales person "selling" credit cards over the phone. Suffice it to say that I was a woeful salesman, and after two weeks of utter mediocrity someone took pity on me and I was allowed to move to a much less stressful data entry position with the same company.

What makes this story testing-blog worthy is the interview. The interviewer took a Bic biro and held it up in the air, asking me that most common of sales interview questions: "Sell me this pen!"

I had never done sales before, but the recruitment consultant who placed me had given me a primer on just this question. One isn't supposed to try to market the pen immediately. The interviewer hasn't yet stated what he or she wants in a pen, and you may sell it all wrong. You ask questions about the interviewer's needs and circumstances - what documents will they write with it? What is their pen budget? How long do they want it to last? Will it be used on a paper surface, and what type? Does the interviewer like pen lids and clips that fit over a top pocket? ...and so on.

The candidate then looks at the pen's qualities with respect to the answers to the above and tries to find selling points in the pen that match the interviewer's needs. "You say you like pens cheap - nothing cheaper than a Bic".. "You don't want any of that leakage and filling up with ink hassle - not much of that with this biro!"... "You have a penchant for blue - here is a lovely blue pen!" etc..

Years later, when I first started working in testing in London and meeting other testers at meetups, I heard that it had been common for a while for testing interviewers to take out a biro or ballpoint pen and ask "(How would you) test this pen?" Nobody liked this question, as it seemed obtuse and somewhat demeaning to ask it of an IT professional. It appears to have gone out of fashion; I have never been asked it, nor known anyone else to ask it, in recent years. Nevertheless, as it does for the mindset of a salesperson, it gives the interviewer a way to gauge the approach and mindset of a candidate tester - especially a relatively inexperienced one.

How do candidate testers approach this question? Do they attack the pen with scenarios immediately, without knowing what the interviewer values in a pen or wants to do with it, or do they ask the right questions? Maybe the interviewer only likes things in the colour blue - the red biro you were given would be a severe no! Maybe there are multiple interviewers, each with conflicting expectations of what their ideal pen would be like - this could be explored further until some consensus is reached. For writing signatures on formal letters to clients, a fountain pen may be in order. To scribble notes into a notepad whilst exploratory testing, a biro or cheaper ballpoint would do the job.

Does the tester then start scripting a set of test scenarios before testing? If the pen turns out to have no ink in it, or the lid doesn't come off, then most other tests will be blocked and the scripting may be largely wasted effort. What if the damn fickle interviewers want to change the pen requirements, or don't have many requirements at all? Maybe a tester more inclined to an exploratory approach will have more luck. How would one approach that?

What sort of risks does the tester come up with? A terrible leakage accident? We could have a risk of that in your top pocket or on that vital signed release document! How about running out of ink at the worst moment? How about the lid being lost and the ink drying up - maybe a click-button pen might be better!

What does the candidate suggest regarding edge cases? Can we test the pen on parchment? How about writing with the pen on a huge roll of paper until the ink runs out? How about subjecting it to 35-40 degree heat for those days when the air conditioning breaks - some pens may fail or leak at that temperature.

The interviewer may ask how the pen test result will be reported. A full test summary document? A simple set of pass/fail results? An extract from HPQC? A simple review meeting? What tools does the candidate say are required?

These are all largely facetious examples - however pondering on the points above is a huge part of what we testers do. A simple question, largely forgotten, can reveal so much....

Maybe it's time we started asking about how we test our pens again....

Wednesday 13 September 2017

On Blogging - the Fears and the Inspiration

Whilst I don't write as many blog articles as I would like - and it sometimes takes a lot to get me started - I immensely enjoy writing tech and testing blog articles. The fact that any tester, even someone as ordinary and relatively unknown as I, can write and self-publish (unedited by others) something that expresses personal thoughts and experiences related to tech, and that at no cost to me it can be read by, and stimulate thought and conversation for, anybody in the world as soon as it is posted, is the kind of marvel that the great writers and philosophers of the past would have killed for. We take it for granted.

Nevertheless, it took me ages to overcome my fears about putting my thoughts out to the world. I had written blogs about things that seemed more trivial in nature (my Another Sydney Blog about less touristy areas of Sydney was a good example), but writing about testing?!?

Fears I Had....

1) What if I write something awful and contentious, or get something wrong, and end up vilified on Twitter?

2) What if my current or a future employer took offence at it?

3) Testing is still a minefield of arguments and fierce debate even today. What if my experience didn't match up with some perceived glorious vision of modern testing practice - I'd look like some old fossil!

4) What do I have to say that the testing experts and gurus haven't already said? I'm just an average journeyman tester who had never spoken at a conference (at the time) or written a bestselling tech book, studied CS at Oxbridge or MIT or worked at Google. Who cares what I think?

5) I don't know how to write like a pro. Will people be interested?

...and others I have probably forgotten.

Of the fears above, some can be real issues whilst some are laughable and more indicative of my own issues with imposter syndrome at the time, however I am sure that they do put off others from sharing their thoughts and experience online.

So how did I get started? By chance, actually. Many years ago I went to a talk by the testing expert and prodigious blogger James Bach at Google Sydney - not long after reading his great book "Secrets of a Buccaneer-Scholar" - and then had the pleasure of talking with him afterwards, in person and by email, about writing articles. He was extremely encouraging on the topic of blogging, and if I hadn't met him it is quite likely that I would never have got started. I asked him what a tester should write about that hadn't already been said. His answer: "Write about your experiences!" Perfect...

My first efforts were tentative and not great. I wrote a rather muddled and unsatisfactory article about ISO29119 - the impending software testing standard that ended up widely hated and disappeared. My next article, taking the de Montaigne approach, was a very personal article about my own Imposter Syndrome.

After my first few tentative blog articles I entered a period of fear and procrastination that lasted for over two years, until I started writing again in early 2017. The last several months have been a period of experimenting with different writing styles and finding my own "voice" - which I am nowhere close to yet, however there has been improvement and an encouraging reception.

What about the fears I mentioned above, which caused so much hesitation and procrastination? None of them have materialised at all (thankfully).

1) I have made mistakes and written about contentious subjects, but the feedback received has been overwhelmingly positive. Where criticism has been levelled, it has always been polite and constructive. I presume that for most new testing bloggers not courting controversy for its own sake, the feedback will also be civil.

2) Of course it is possible that your employer could take issue with your blog article; however, I think companies are generally supportive of their staff blogging as long as some details remain confidential. My employer has generally liked the blog and the @TestingRants twitter handle, and I have had likes and retweets from its social media (much appreciated).

3) Testing is a large enough field with diversity of practice for all types of experiences to be shared. For all the companies practicing agile, exploratory, devops, continuous integration, various levels of automation etc. there are still lots of companies taking traditional waterfall/V-Model approaches doing scripted manual tests (prepared, matched carefully with requirements, stored in HPQC to be approved by a BA or stakeholder beforehand). My last project was very similar to this. I do expect this to diminish over time however - even so, with the incredible rate of change in IT the future of testing and QA is unwritten. We can all write about our experiences without embarrassment.

4) This is probably the most laughable fear, and mostly indicative of my struggle with imposter syndrome at the time. My experience is that if you put your thoughts out in a blog and make considered and rational points, people will read and appreciate it - if only because the act of writing honestly about one's work is challenging. We all have something others can learn from, and nobody started off a testing expert - that expertise was developed through years of experience and reflection. Also, whilst prestige does matter in IT to a degree, the vast majority of dev and testing bloggers have never studied CS at Oxbridge or MIT or worked at Google, and they do just fine!

5) The reality is that your first set of articles will be woeful to middling. Everyone starts off like that, and the chances that we will ever truly write like one of the greats are very slim anyway, so don't let that put you off. You will be struggling to find a writing style that suits you and, many times, something to write about. Your content may be inspired or random more than considered - mine tends to be. You may look back at it and think "What the f**k was that?", but the reality is that when written from a position of personal experience and thinking about the field, your article will be interesting and useful to someone - and over time your following will grow as your confidence and skill grow.

So if you haven't yet started a blog, come and join us! We don't bite, and sometimes we even offer kind words and encouragement! Get started!

And one final note - Thanks James Bach for your kind words and encouragement at the beginning. They meant a lot.

Friday 11 August 2017

On Why the "Testers Should Know How to Code" Mantra could Hurt Us All

On 31st July 2017, the tester and blogger Trish Khoo ran a Twitter poll on the question "Should all testers be learning to code?". It produced a near 50/50 split in opinion and numerous Twitter comments on either side.



As a tester with a coding background who is also studying for a computing degree, I took a great interest in the debate and read up on other perspectives on the subject. There are various competing arguments, a selection of which I have outlined below.

Perspectives on Testers Knowing Coding


Trish Khoo, in her follow-up blog post to the poll above, "Yes all testers should learn how to code", argues using Australian, US and UK sources that a basic level of programming is now routinely taught to schoolchildren and will come to be seen as much a fundamental skill as maths and science. All testers working in software development should therefore know programming to future-proof their careers.

Joel Montvelisky's very detailed 2017 article for PractiTest "Stop being a NON-Technical Tester!" advocates that a tester should have sufficient coding skill to do the following -

  • Understand the Architecture of the Product under Test
  • Review the Code under Test (e.g. SQL queries, scripts and configuration files)
  • Automate repetitive sanity and smoke or setup tasks (see the sketch after this list)
  • Use free or paid automation tools such as Selenium, QTP etc.
  • Troubleshoot from Logs and other System Feeds
  • Run bespoke SQL queries
  • Talk the language of their technical peers
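
As a minimal sketch of the automation item above, here is the kind of repetitive smoke check a tester might script using Selenium's C# bindings (assuming the Selenium WebDriver package and a local ChromeDriver; the URL and element id are invented placeholders):

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    class SmokeCheck
    {
        static void Main()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                // The kind of check repeated after every deployment:
                // does the site come up, and is the login form reachable?
                driver.Navigate().GoToUrl("https://example.test/login");
                bool loginFormPresent = driver.FindElements(By.Id("username")).Count > 0;
                Console.WriteLine(loginFormPresent
                    ? "Smoke check passed: login form present"
                    : "Smoke check FAILED: login form missing");
            }
        }
    }
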
Elisabeth Hendrickson's seminal 2010 blog article "Do Testers Have to Write Code?" argues that only testers doing scripted automation require programming skills; however, she extends her argument with her own survey of US tester job ad data to advocate that everyone serious about being a professional tester should know at least one language (she recommends SQL) as a minimum. In a way, her argument is in the same category as Trish Khoo's.

Rob Lambert's 2014 Social Tester blog article "Why Testers Really Should Learn to Code" states Elisabeth Hendrickson's position far more bluntly: the job market demands it, and testers who cannot code are committing career suicide and will be pushed out by those who can.

Michael Bolton's article "At Least Three Good Reasons for Testers to Learn to Program" does not oblige testers to know programming (in fact, in the comments he advocates diversity of skills at the individual and team level). He does, however, recommend learning it: for more opportunities for tooling, for insight into how computers and programs work (and may fail to work), and for humility and empathy with programmers over the fact that coding can be very difficult.

Alessandra Moreira's 2013 article "Should Testers Learn to Code?" takes a balanced approach, referencing other articles (including those above) without taking a firm stance. It makes the point that many good testers cannot code yet are still effective, and that not all testers actually enjoy coding.

I think the debate actually breaks into two different questions.
  1. Are there benefits for testers in learning some coding?
  2. Should all testers be expected or obliged to know how to code?

My Perspective

Based on my own experience, I think the answer to Question 1 is undoubtedly yes. There are many benefits in testing generally (although not in all testing jobs in equal measure) to knowing some programming. My knowledge of SQL (from my pre-testing days as a DBA) has been a great boon to my usefulness and success in various web and data-centric projects, and a knowledge of C# and Java has let me bring test automation and data creation into environments that previously had neither. Coding knowledge opens the door not just to test automation but also to interesting and important areas such as penetration testing, unit and integration testing (usually done by developers but with increasing involvement from test analysts) and performance testing.

Question 2 is far more problematic, and a firm answer of "yes" - taken as standard across the industry - would, I believe, be wrong and dangerous for the testing community. Some reasons are stated below.

1) Exaggeration of the Importance of Programming Skill


Programming is a very useful skill, however it is not critical for all roles. There are still many functional testing projects that carry on just fine with purely manual testing (my last role was one of these), or where automation is impractical or done by an existing development resource. A keen tester looking to code heavily might well be a fish out of water in these roles, frustrated and with skills atrophying. Common sense and judgement need to be applied.

Elisabeth Hendrickson notes -

"Testers who specialize in exploratory testing bring a different and extremely valuable set of skills to the party. Good testers have critical thinking, analytical, and investigative skills. They understand risk and have a deep understanding where bugs tend to hide. They have excellent communication skills. Most good testers have some measure of technical skill such as system administration, databases, networks, etc. that lends itself to gray box testing. But some of the very best testers I’ve worked with could not have coded their way out of a For Loop."

There are other skills required for almost every testing job - planning, critical thinking, tenacity, conscientiousness, teamwork, written and verbal communication, soft skills, management and leadership, bug reporting and advocacy. Testers without any programming skill who are looking to enter and advance in their careers would be better served improving in these areas first, instead of learning coding from scratch. Hiring managers for projects where test automation is either not required or can be allocated to others would be best served by hiring for the generic core skills emphasised above, as opposed to having a tester who can program "just in case".

2) The Plurality of Tester Backgrounds and Perspectives will be Damaged


The testing field is uniquely accommodating to those from a wide range of backgrounds and disciplines - very few of which would have required programming - who can provide a plurality of perspectives and immediate utility.

  • Testers from business and industry fields, BA and the service desk can bring domain knowledge and a commercial, user-oriented focus and perspective and experience of the kind of failures that are most critical and should be looked out for.
  • Those from science and engineering backgrounds can bring great analytical, mathematical, experimental and system modelling skill.
  • Those from arts and humanities backgrounds are good at analysing data from diverse sources and documents and can provide great verbal and written communication and reporting skill. Musicians and foreign language grads already deal with complex systems riddled with rules and exceptions. They will find their niche in testing.
  • Some of our greatest testers have had no formal education but great practicality, a hard work ethic, passion for technology and no shortage of analytical or soft skills.

As a community, and following on from 1), we need to protect this diversity of background and perspective if we want to achieve great things for our clients and employers. Requiring that testers be coders by default erects a needless barrier, preventing those from outside the industry who could provide much to software development teams from getting a foothold in it.

3) How Much Programming Skill is Enough? In what Areas? How would this be demonstrated?


If we impose a requirement that all testers, upon entering the profession, must have programming skill to be hireable or useful, the testing community would probably have to define a basic curriculum, at least as a guide. What is the base minimum? Even in roles that require some programming skill, the "amount" required is highly contextual.

Montvelisky states in his article that the base minimum would be the ability to read and edit configuration files, execute SQL queries, automate test setup tasks and use frameworks such as Selenium and QTP/UFT. As a minimum this sounds reasonable; however, considering the sheer number and flux of operating systems, setup tools, scripting languages and test frameworks out there, even this is enormous work from a learning perspective.

  • As an example, we may ask for simple scripting skills. Do we expect knowledge of Windows shell, PowerShell, Linux Bash, Perl, Python, even JavaScript for Node.js? All of these are extremely useful to know, and I have already used some of them in work and study.
  • For data retrieval, we would probably mandate SQL as a minimum; however, NoSQL databases such as MongoDB are increasingly popular these days. Can we afford to miss them out? Why not also REST API and web services tools? JSON and XML?
  • Regarding test automation frameworks, we have Selenium, Postman, SoapUI, Cucumber etc., but are these enough? Many companies in the corporate world still use QTP, SilkTest and TestComplete, each with their own scripting languages and tooling. We would struggle not to include them - and since they are high-cost and proprietary, it is difficult for learners to get their hands on them outside of teams that already have them. Even Selenium offers bindings in several languages, including Java, C#, Python and JavaScript.

The above would be difficult to achieve even for recent CS and software engineering grads. The testing community cannot even reach a consensus on what is testing and what is "checking", so could it agree on a basic programming requirement? I doubt it could, and this would cause uncertainty and confusion for all of us. Groups like the ISTQB and IEEE would be tempted to use the vacuum to impose a minimum test programming standard (and even provide certification in it), which others such as the Context-Driven and Rapid Software Testing communities would fight hard to resist - creating another great schism in the testing community.

We could let individual recruiters and teams decide (as is done in development); however, since it is impossible to learn enough to be at a professional level in all of the above, new testers would have to specialise before even getting their first jobs, with that choice closing off large areas of the job market as a result. Established testers who have no interest in programming whatsoever would have to spend enormous time and resources upskilling, specialising and learning tools they don't care for just to be considered "adequate", despite their skills in other areas and otherwise stellar achievements to date. Is that fair to them?

Mandating that testers must have some minimum programming knowledge opens up a minefield of questions and concerns that the testing community will struggle to agree on. This leads on to point 4 below.

4) We Encourage Unhelpful and even Lazy Recruiting Practices


As someone who has done commercial development in the past, is studying computing at postgraduate level and takes a great interest in programming now (although happily committed to being a tester), I spend some time looking at the various IT and dev industry forums. Regarding recruitment, there are various complaints I have come across from others -

  • Employers and recruitment consultants that "demand the world" - requiring an unreasonably long list of programming languages and frameworks, rejecting any applications with slightly different but still quickly transferable skills, frameworks and underlying concepts.
  • Job ads for "entry-level" developer roles requiring CS degrees (even for relatively simple programming tasks that could be done by non-graduates) and years of commercial experience - suspected of existing simply to cut down the number of applications.
  • Job ads requiring years of commercial experience in the latest and trendiest tools of the day, which hurts the chances of those outside new startups and innovative projects: developers in BAU and corporate environments using long-established tools, and older programmers.
Without a consensus on point 3, I suspect that some of the lazier and more unhelpful recruitment practices mentioned above will flood into testing recruitment. I have already seen some ads for experienced testers requiring a CS degree as a minimum, which disregards the skills, achievements and experience of those with backgrounds in other domains who could still do the job.

Final Points


Dorothy Graham, in her 2014 blog article "Testers Should Learn to Code?", is strident that a mandatory coding requirement is a dangerous attitude, and lists various thought-provoking reasons, some of which overlap with the above. A selection -

  • Test managers can use this as a justification to get rid of good and productive testers.
  • Not all testers will ever be good at, or interested in, programming.
  • Testing skills become devalued relative to coding skills.
  • Tester-developers will either choose to or be forced into becoming developers, and so we lose people from the testing profession.

I agree with her opinions, and hope that this blog article and those linked to above continue to be a useful part of the testing-coding debate. An expectation that testers must be able to code may not help us, and may in fact cause needless chagrin in our profession.

Monday 7 August 2017

On the Need to Test for Beauty and Elegance

I have spent the last month off work, visiting parts of the UK, Paris and Seoul with my wife. During this time I spent much of my time visiting museums and historic sights and, strangely enough, pondering my half-formed and admittedly likely flawed ideas about what beauty and elegance are.

What IT products would end up in a museum or art gallery in 100 years' time? Computers that didn't look great but represented an impressive landmark in CS, such as Tom Kilburn's Baby? Beautiful and stylish Apple iMacs and iPads? More "functional" but equally noteworthy VIC-20s and Atari STs? What about software? Computer games such as the Tomb Raider and Destiny series are beautiful to look at and very fun to play, but the much more mundane Microsoft Office has been more important to people in a wide range of areas. Can one genre be seen as "art" and elevated to the Louvre one day, sharing a wing with the Botticellis and the Venus de Milo, while the other is deemed more fitting for industrial museums such as the venerable Museum of Science and Industry in Manchester - sharing a wing with the steam engines and 19th-century cotton weavers? Is this a poor distinction to make - should all software be treated equally?

Sandro Botticelli: Madonna and Child with St. John the Baptist, c. 1470–1475, Louvre (CC: Wikipedia)
Replica of Tom Kilburn's Manchester Small Scale Experimental Machine, nicknamed "Baby". The world's first stored program computer. (CC: Wikipedia)



How many development teams see their end product as aesthetically pleasing and write this into their requirements? Who is the best judge of something so abstract and intangible - a UI expert, key users, the testers or the manager? In the middle and latter stages of a project, when the deadline is upon us and the resource constraints come thick and fast, how much aesthetic quality is sacrificed to reach the goal? How often do testers actually raise an issue stating that a design or its implementation is ugly or inelegant?

I have worked with some (what I regarded as) damn ugly and inelegant software in past projects, yet I admit that I have rarely put up my hand and stated that a feature was just not stylish or pretty enough. I have improved on this recently. In my experience the all-too-common wisdom in teams has been that it was "what was agreed in the spec" or "what the client wants" or "what the designers/devs came up with, and it is too late to change it". In terms of aesthetics, I have simply tended to focus on usability and how closely the user interface matches the design. In this respect I should have done better and fought against the perceived wisdom - in some cases nothing more than excuses for inaction.

However, usability is not the same as elegance, and "matching the design" is not the same as beauty. All software intended for use by people will be judged on its aesthetic qualities and elegance. We want software that elates the heart as well as satisfies the need. Testers have a critical role to play in achieving this.

The focus (especially in agile development) on inclusion of features above all else - done in a rushed fashion without care for aesthetics - detracts from the elegance (of which simplicity is a large part) and beauty of the eventual product. Even after production deployment, we create environments in which the final product has new challenges to cope with - be they oversized and dynamic ad content that changes the layout of web pages, or overly complicated and badly designed later-introduced game levels and characters - that give a poor impression to our users.

This can also be caused by testers being brought in at the development (and later) stages of the project - after requirements and design are "locked down". The current trend towards "shift-left" may resolve this. Another cause on certain projects is a heavy focus on meeting the letter of the requirements over thinking about what is best for the users - something that takes leadership from the stakeholders and project management down to resolve, but for which testers must be willing to ask the difficult questions.

Regarding who is best placed to judge what is beautiful or elegant - or what these terms even mean in IT - I have no definitive answer and put the question out to the community. There are web and UI design standards which can help, but they do not solve the problem. At one level, product walkthroughs and the inclusion of management and user views at every iteration can at least provide opinion and consensus (where all are allowed to speak with equal merit); however, even in these cases the focus can tend towards what is functionally sufficient rather than the aesthetics.

I implore all of us working in development teams to challenge views about design and look and feel as early as possible in the lifecycle. Testers should ask hard questions about the aesthetic qualities of the product; however, without acceptance from stakeholders, management, business analysts, designers and developers, nothing will change. Clearly there are many examples of stunning and elegant products that people like as well as use (Apple is a case in point), so good practice does exist. We all need to find and follow it.

Friday 7 July 2017

On the Last Three Months - My Conference Talk, Free Learning Resources and Learning Security Testing

Dear All, apologies for the lack of recent blogs.

It has been a hectic three months, however most rewarding and interesting. I have condensed some of the most interesting events and activities below.

Quality Software Australia 2017 Conference


I had a talk proposal accepted and was thus invited to speak at Quality Software Australia 2017 in Melbourne. The talk, which was well attended and well received, was on "Generating Random and Fake Test Data for Functional and Fuzz Testing" - you can find the slides here. A precursor to the talk was given the previous year at the Sydney Testers meetup group and repeated a month before the conference as a Brown Bag lunch event at Avocado Consulting (my employer).

As part of the demo for the talk I created an add-in for Excel 2013 and above to automatically populate fake user name, address, phone number and other data. This is freely available and can be found here.
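To give a rough idea of the technique (this is not the add-in's own code), here is a minimal sketch using the Python faker library; the default locale and the output file name are my choices for illustration only.

# pip install faker
import csv
from faker import Faker

fake = Faker()  # default en_US locale; pass e.g. Faker("en_AU") for others

# Write a handful of fake user records to a CSV that could be pasted into Excel.
with open("fake_test_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Name", "Address", "Phone"])
    for _ in range(5):
        writer.writerow([fake.name(),
                         fake.address().replace("\n", ", "),
                         fake.phone_number()])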

While I have done a few talks before at Sydney Testers, this was my first conference talk and only the second test conference I had ever attended. I didn't quite know what to expect from the organisers, other speakers and attendees, especially since I don't have a high profile as a tester. I needn't have worried. The organisers, led by Rajesh Mathur, and the other speakers were extremely friendly and welcoming, and they knew their stuff. The speakers were treated very well by the organising committee and I made many new acquaintances. I took great pleasure in speaking with attendees and in the great talks presented by people passionate about their subjects.

All were great, but the highlights for me included the first day's keynote speaker Mike Lyles, whose talk "The Drive-Thru Is Not Always Faster" was a masterclass in energy, poise and preparation (despite the fact that he had arrived from a delayed trans-Pacific flight just a few hours before the keynote!), Smita Mishra's thought-provoking discussion "Debugging Diversity", David Bell's talk "The continuum of certificates and skills" and my Sydney Testers colleague Sunil Kumar on "The no-man's land of microservices & its testing". Sadly, due to work commitments I was not able to attend a further day of seminars, something I would probably have greatly enjoyed.

The Quality Software Australia team is holding a conference in Sydney, Australian Testing Days 2017, on the 30th of October. If it is anything like the above and you happen to be in Sydney at that time, I thoroughly recommend it.

Free Learning Resources for Testers


On the 20th of April, frustrated by the lack of a centralised, unbiased source of testing resources and with an idea of setting up a free curriculum for new and improving testers, I created a GitHub project for a library of links covering a wide range of testing articles, videos, blogs, podcasts and associated computer science and programming resources. At least 300 links have been added so far.

This was promoted on Twitter, LinkedIn and in the Lightning Talks section of QSA2017 (see above) and has been well visited - it has been starred by 72 people and forked by 20, with six external contributions in total. The resources are frequently updated and extended, and it is worth a visit.

Security and Penetration Testing


The university where I am a student, the University of New South Wales (UNSW), started a new and innovative lecture and lab-based CS course module on web security and pentesting. This being an area I was quite ignorant of, I signed up and studied it part-time for about three months.

The course, based largely on the 2013 OWASP Top 10, covered areas such as the following (a short illustration of one of them follows the list) -

  • Threat Modelling
  • Cross Site Scripting (XSS)
  • Authentication Bypass and Privilege Escalation
  • Shell and SQL Injection attacks
  • Directory Traversal
  • Social Engineering
  • Third-Party Vulnerabilities (e.g. flaws in WordPress)
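To illustrate one of these areas, below is a minimal, self-contained Python sketch of my own (a toy example, not course material) showing how string-concatenated SQL falls to a classic injection payload, and how a parameterised query defuses it.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "nobody' OR '1'='1"   # a classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# so it returns every row despite matching no real user name.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print("concatenated:", rows)       # [('alice', 0), ('bob', 1)]

# SAFE: a parameterised query treats the whole payload as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterised:", rows)      # []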

The main lecturers for the course were Norman Yue and Abhijeth Dugginapeddi, practitioners of security testing at the Commonwealth Bank of Australia. As teachers go, I found them amazing, passionate and knowledgeable. I found the course a real pleasure, despite it being very challenging and hard work in places.

As part of the community contribution for the course I created a tool for the automated execution and display of HTTP requests and responses from a list of URLs and parameters in an Excel spreadsheet. This is freely available and can be found on my GitHub.
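The rough idea, sketched here in Python with the requests and openpyxl libraries (the spreadsheet name and column layout are assumptions for illustration, not the actual tool's format):

# pip install requests openpyxl
import requests
from openpyxl import load_workbook

wb = load_workbook("requests.xlsx")   # hypothetical input: URL in column A,
ws = wb.active                        # query string in column B

# Skip the header row, fire each request, print status code and body length.
for url, params in ws.iter_rows(min_row=2, max_col=2, values_only=True):
    resp = requests.get(url, params=params, timeout=10)
    print(resp.status_code, len(resp.text), url)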

Penetration testing and security are seen as a niche area almost separate from the wider testing community - an afterthought done by a separate team near the end of the development lifecycle. However, with the critical risk posed by data breaches and the numerous cyber attacks by state and non-state entities in recent years, the testing community cannot afford to be ignorant about web security. I believe it should be treated with the same attention and respect as, say, functional and performance testing - with security standards and prevention considered at the requirements and design phases of the project, and all of us (while not necessarily having the knowledge to be pentesters) aware of the kinds of attacks listed above and how to mitigate them.

With this in mind, I arranged for Abhijeth Dugginapeddi and Norman Yue to speak about web security testing at the Sydney Testers meetup at ThoughtWorks on the 5th of July. Their lively and most informative talk "Security for Non-Security Engineers", very much along the lines of the course above, was recorded and can be seen here. It is well worth watching.

Monday 22 May 2017

Circumventing CCTV and Web-Based Home Security Systems




(This article was a team submission with Blake Dutton, Luke Cusack and Ryan Shivashankar for an exercise as part of the Web Security and Pentesting course COMP6443 at the University of New South Wales. It has already been submitted, and is reprinted here for your perusal.

It goes without saying that this is printed for informational and academic purposes only and should NOT be used for illegal or unethical purposes. We who wrote the article see it as nothing more than a case study of the vulnerabilities in smart home security systems, done as a university assignment for an IT security course, and would never advocate or engage in burglary, black hat hacking or DDoS attacks.)



For devices designed exclusively for the purposes of security and surveillance, CCTV cameras (particularly WiFi-based ones) are surprisingly easy to hack and jam. As threat vectors in themselves, they contain significant vulnerabilities - particularly weak default passwords - that provide a great opportunity for a hacker to take them over and use them for nefarious means.

A Case Study - Home CCTV cameras - the ultimate botnet?



On October 3rd 2016, Techworm reported on the distributed DoS trojan Trojan.Mirai.1, which infected over a million IoT devices running on Linux architectures. Of these, an unspecified large number were internet-linked "smart" home security cameras. This enormous command-and-control network of infected IoT devices was used on September 20th 2016 to conduct a 620 Gbps DDoS attack on the website of web security journalist Brian Krebs.


This was followed by a 1 Tbps attack on the French hosting company OVH and, on October 21st, two huge and well-publicised targeted attacks on the Internet Performance Management company Dyn, which were mitigated well but still caused significant performance impact for its managed DNS customers and their end users. The company's blog estimated 100,000 malicious endpoints, with a 'significant volume' originating from Mirai-based IoT botnets. Armies of infected IoT devices, including smart CCTV cameras, used on demand as a botnet are a dangerous threat to web applications and infrastructure. Smart home CCTV cameras, which are designed for ease of setup and administration and whose passwords are rarely if ever changed, are a particularly soft target.

Scenario 1 - The Jamming Threat

Another weakness of many home CCTV cameras is their reliance on wireless radio transmission. More recent off-the-shelf home security devices such as the popular Ring system of floodlight cams, video doorbells and motion detectors communicate using 2.4GHz WiFi, with live video streams and interactivity accessible via phone and web apps.

Generally the FCC and other agencies restrict the frequency ranges of wireless security systems to 433MHz / 800MHz / 900MHz / 2.4GHz / 5GHz, with 2.4GHz being by far the most common. Security devices (especially in the US) are legally required to list the frequencies they broadcast on - these can be easily found via a web search.

This use of wireless communications is a significant weakness that can be exploited by savvy burglars. The range from 900MHz to 2.4GHz is covered by most cheaper off-the-shelf wireless signal jammers (as can be seen on, and easily purchased from, sales sites such as JammerAll). On a fairly typical tool such as the "Portable 8 Bands Selectable Man-carried GSM 2G 3G 4G Cellphone Lojack WiFi & GPS Jammer", bands 2, 4 and 5 would block all but the 433MHz (used by motion detectors) and 5GHz ranges (although some other jammers do target these frequencies), making it effective for blocking most wireless home security cameras. The downside is that cheaper pocket-sized jammers tend to have limited range (~20m).

Some wireless CCTV and home security systems, notably SimpliSafe, counteract this by using an anti-jamming algorithm to alert the owner to a potential jamming attack.
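How such an algorithm works is proprietary, but one naive approach is heartbeat supervision: each sensor checks in periodically, and several consecutive missed check-ins combined with an unusually high RF noise floor are treated as probable jamming rather than a mere dropout. A purely illustrative toy sketch in Python (the thresholds are invented, and this is in no way SimpliSafe's actual algorithm):

def classify(heartbeats_missed: int, noise_floor_dbm: float) -> str:
    JAM_MISS_THRESHOLD = 3      # consecutive missed sensor check-ins
    NOISE_THRESHOLD_DBM = -70   # unusually hot 2.4GHz noise floor
    if heartbeats_missed >= JAM_MISS_THRESHOLD and noise_floor_dbm > NOISE_THRESHOLD_DBM:
        return "ALERT: possible jamming"
    if heartbeats_missed >= JAM_MISS_THRESHOLD:
        return "WARNING: sensor offline"
    return "OK"

print(classify(4, -60.0))  # ALERT: possible jamming
print(classify(4, -90.0))  # WARNING: sensor offline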

The Basic Jamming Attack


In 2015 CNET published an article describing a plausible attack vector that could be used by a thief looking to burgle a small, locked home whose CCTV cameras and motion detectors are provided by an advanced wireless security setup such as SimpliSafe. In this case the burglar would need to know the following -

  1. The position of all CCTV cameras, doorbell cameras, smart floodlights and motion detectors in the property, and the layout of walls and floors.

  2. The frequency of the wireless signal.

  3. The algorithm used to distinguish a jamming signal, and thus how to hide the jamming signal from it. In the case of (for example) SimpliSafe, the algorithm is proprietary and under regular evolution, making hacking it a difficult task.

It was presumed that the most likely burglary scenario would be opportunistic breaking and entering, which accounts for about ⅔ of all residential burglaries in the US.

In this case, a burglar would need to jam the signal right from the start, since breaking a window or opening a protected door would trigger the alarm. Once inside the property, the burglar would need to maintain the jamming signal at all times to prevent discovery by motion detectors or internal CCTV. This might require several jammers positioned at different points, which would take time to set up, while a jamming signal would also have to be maintained outside. This would be difficult to achieve and would carry considerable risk, especially considering that the burglar's jamming equipment would also need to be configured to send a signal hidden from a (proprietary) anti-jamming algorithm - something the burglar would find difficult to have advance knowledge of.

It is thought that while this kind of attack is possible, the opportunistic nature of most residential burglaries does not gel with the level of sophistication involved. CNET concluded that most burglars would simply move on and target properties without smart CCTV security systems.

Alternate Attack Vectors

We have considered alternative attacks against wireless defence systems such as those above -

  1. Cutting off Power

One possible attack is to -

  1. Cut off mains power from the outside somehow (e.g. at a street electricity box or by cutting overhead lines).

  2. Break into the property without risk of setting off the alarm or being picked up by smart CCTV, motion detectors, smart locks etc.

An attempt of this sort happened to properties on my (PW) street about six months ago, in broad daylight. Neighbours saw a man tampering discreetly with an electricity box; he ran off when challenged and threatened with the police. A more successful attempt would have been executed in the darkness of night, having scoped out houses whose occupants were away on holiday.

Some WiFi-based home security systems, including the aforementioned Ring system, are powered by rechargeable or replaceable batteries, which would obviously make them resistant to the above.

  2. Faking Security Faults


This would work by using a jammer to provoke several consecutive false alarms from a smart CCTV / home security system implementing an anti-jamming algorithm (such as SimpliSafe). It works on the hypothesis that while the house owner would immediately call the police or raise the alarm if an intrusion were detected or CCTV revealed an unknown presence in or around the property, owners who are not computer savvy may not know how to react to repeated alarms raised by signal jamming where no obvious cause is in sight - especially at night, if there is no 24-hour support. The temptation would be to regard it as a fault in the system, turn the system off and return to bed until morning, leaving the house temporarily defenceless.

  1. From a safe distance and late at night, using a jammer tuned to the frequency of the CCTV home security system and with enough strength, execute a series of jamming attempts such that the alarm is triggered.

  2. Each time the owner reacts and returns to bed, check whether there is still a signal, or execute the jamming attempt again a short time after.

  3. If no more alarms sound, the security system may have been switched off. To the would-be burglar, this is obviously good.

  4. Break into the property some time later.

This obviously depends on -

  1. Knowing the brand and type of smart security system used and its broadcast frequency.

  2. Having a sufficiently strong, directed and sophisticated jammer.

  3. No other means of detection or raising the alarm being present (e.g. dogs or other pets, or occupants still awake).

  4. A great deal of luck.

Once again, the above approaches require prior knowledge and work, and do not gel with the opportunistic, low-tech nature of most residential burglaries. Another issue is the low range of pocket jammers, which would require a burglar to remain close to the property for long periods and risk discovery. A burglar would see more sense in simply targeting less well-protected properties nearby.

Scenario 2 - Hacking into Smart Camera Security Systems

Another option for disabling smart camera security systems on Linux platforms is to hack into them and either disable them or take control. As shown in the case of Trojan.Mirai.1, smart IP-discoverable security and surveillance cameras often treat their own admin security as an afterthought, and more often than not have simple, rarely if ever changed default user accounts and passwords - which are well known and targeted by malware. This is common with IoT devices generally. Brute force attacks on Symantec's honeypot in 2016 (as reported in Symantec's blog article of 22nd September 2016) show the top usernames and passwords used by malware to target IoT systems.

Top User Names   Top Passwords
root             admin
admin            root
DUP root         123456
ubnt *           12345
access           ubnt
DUP admin        password
test             1234
oracle           test
postgres         qwerty
pi               raspberry

(* targeting Ubiquiti routers and equipment)

The most common attack (particularly for malware distribution) is as follows -

  1. Port scan random or targeted IP addresses for open Telnet or SSH ports.

  2. Brute-force the logon with common credentials like those above.

  3. Once access is gained, use wget or tftp to download a shell script to the device that can be used for access and control.

  4. Where this is the goal, download malware and bot software corresponding to the operating system accessed.
This can be amended to disable security systems, retrieve their login data or take control of them as required.
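Turning this around for defence, here is a small Python sketch for auditing a camera you own, checking whether its web admin interface still accepts any of the default credentials above (the URL and the use of HTTP Basic Auth are assumptions for illustration; many cameras use other login mechanisms).

import requests
from requests.auth import HTTPBasicAuth

CAMERA_URL = "http://192.168.1.50/"   # a device you own (address assumed)
DEFAULT_CREDENTIALS = [("root", "admin"), ("admin", "root"),
                       ("admin", "123456"), ("ubnt", "ubnt"),
                       ("pi", "raspberry")]

# Flag any default pair the admin interface still accepts.
for user, password in DEFAULT_CREDENTIALS:
    resp = requests.get(CAMERA_URL, auth=HTTPBasicAuth(user, password),
                        timeout=5)
    if resp.status_code == 200:
        print(f"Default credentials still active: {user}/{password} - change them!")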

Exploiting Weaknesses

There have been many instances where even the above is not necessary. In 2013 NetworkWorld posted an article stating that 406 links to vulnerable unregistered TRENDnet surveillance cameras, which could be viewed without even a login, had been posted on Pastebin and could be viewed on Google Maps. TRENDnet stated that a fix had been released, but it is very unlikely that more than a few cameras will have had the upgrade applied. Numerous private shots from these surveillance cams have been posted in this and other articles, creating what the article described as a "Peeping-Tom Paradise"...

Another 2013 NetworkWorld article described a vulnerability in Foscam wireless IP cameras (CVE-2013-2560) such that "remote attackers.. (can).. read arbitrary files via a .. (dot dot) in the URI, as demonstrated by discovering (1) web credentials or (2) wifi credentials" without any log stored on the camera. An attacker could "grab videostream, email, FTP, MSN, Wi-Fi credentials" or "host malware or run… botnets, proxies and scanners" or hack other IoT devices on the same network. A tool (getmecamtool) developed by the experts who uncovered the vulnerability automates these attacks.
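For readers on the defending side, the underlying flaw here is classic directory traversal. A minimal Python sketch of my own (unrelated to Foscam's firmware) showing how a naive path join lets "../" escape the web root, and how a realpath check blocks it:

import os

WEB_ROOT = "/var/www/static"   # assumed document root

def resolve(requested: str) -> str:
    # A naive join alone would let "../../etc/passwd" escape WEB_ROOT.
    candidate = os.path.realpath(os.path.join(WEB_ROOT, requested))
    # Refuse any path that resolves outside the web root.
    if not candidate.startswith(WEB_ROOT + os.sep):
        raise PermissionError("directory traversal attempt: " + requested)
    return candidate

print(resolve("css/site.css"))        # fine
print(resolve("../../etc/passwd"))    # raises PermissionError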

Conclusion

This document has outlined various attacks that have been proposed or carried out against home wireless security cameras and other security systems. While jamming wireless security systems is possible, using it to attempt a break-in is considered difficult and rare. There are, however, various vulnerabilities inherent in smart home security cameras and other systems that allow them to be hacked, and these have been used for privacy intrusion and DDoS botnets among other attacks.