The Future is Human-Machine Collaboration, Not Automation.

Shourov Bhattacharya
3 min read · Oct 24, 2017


The world is in the middle of a rush to “automate”. From self-driving cars to machine diagnosis to customer service bots, companies are making heavy investments in efforts to remove humans from more and more processes. Meanwhile the press plays upon the fear of a future where millions of jobs are replaced by AI and robots.

Don’t believe the hype. Over the next few posts, we argue the future is not automation, but better human-machine collaboration.

Automated systems work well in controlled environments where the designers’ assumptions are not challenged. But in most real-world scenarios those assumptions fail at certain critical moments. In these moments, the only thing that prevents system failure is human decision making — a process that we are very far from modelling in a machine.

Technology can do what you know how to do. But it can't do the things you don't know how to do, yet do anyway.

For example: an aeroplane can fly itself 99% of the time. But during a storm or a difficult landing, an experienced pilot is there to give passengers the best chance of survival. The pilot and co-pilot's decision making is a complicated, embodied process. It's partly rational, but it also draws on emotion, muscle memory, social cues and intuition. The pilot-aeroplane system has evolved into the safest form of transport: a collaborative system that combines machine and human decision making.

In our mapping of business processes, we often find critical gaps in technology systems within large organizations. We also find that those gaps are being “patched” by intelligent, resourceful human minds. These people are not only processing information and applying rules but doing many other things — such as building trust, lessening anxiety or removing anomalies from human inputs.

Cognition and computation work together in a feedback loop.

Consultants and engineers do not notice these people or the things that they do. Their blind spot is the exclusion of human cognition from the scope of information systems.

That's understandable, in a way. Human minds are variable, unpredictable and opaque. They cannot be modelled easily, so it is convenient to exclude them. However, almost all information systems rely on feedback from human decisions, which in turn modifies the state of the entire system over time. Conventional, machine-only system descriptions miss this dynamic feedback loop and are fundamentally incomplete. We observe this as a common cause of failure in large IT projects.
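To make the feedback loop concrete, here is a minimal sketch in Python. All names (`HybridSystem`, `overrides`, `ask_human`) are hypothetical illustrations, not a real system: the machine handles the routine cases its designers anticipated, a human decides the cases it cannot, and that human decision is fed back into the system's state, changing how future cases are handled.

```python
# Hypothetical sketch of a hybrid human-machine decision loop.
# The human's decisions are not outside the system: they become part
# of its state and alter its future behaviour.

from dataclasses import dataclass, field

@dataclass
class HybridSystem:
    confidence_threshold: float = 0.8
    # Human decisions fed back into system state.
    overrides: dict = field(default_factory=dict)

    def machine_decision(self, case_id, confidence):
        # Routine path: the machine decides only when its
        # designers' assumptions hold (high confidence).
        return "approve" if confidence >= self.confidence_threshold else None

    def process(self, case_id, confidence, ask_human):
        # First consult state modified by earlier human feedback.
        if case_id in self.overrides:
            return self.overrides[case_id]
        decision = self.machine_decision(case_id, confidence)
        if decision is None:
            # Critical moment: fall back to human judgement...
            decision = ask_human(case_id)
            # ...and feed it back, modifying the state of the whole system.
            self.overrides[case_id] = decision
        return decision

system = HybridSystem()
# High confidence: the machine decides on its own.
print(system.process("claim-17", 0.95, ask_human=lambda c: "escalate"))
# Low confidence: the human decides, and the decision is remembered.
print(system.process("claim-42", 0.40, ask_human=lambda c: "escalate"))
```

A machine-only description of this system would model `machine_decision` and stop there; the `overrides` state, written by people, is exactly the part conventional system descriptions leave out.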

We include people within our system boundaries, and it changes everything. Instead of over-optimizing and over-automating, we think about the hybrid human-machine system as a whole and find the optimal balance between human and machine decision making. In some rare cases that may approach full automation, but more often it is a mix of smart people and “smart” technology.

Our experience gives us other reasons to challenge the conventional wisdom of automation as a strategy, such as the centralization of human errors, concealment of risk and the removal of learning from systems.

But more on that in later posts. In the meantime, we’d love to hear your thoughts.
