In defence of human intelligence (and fallibility) in social work
Published by Professional Social Work magazine, 8 June 2023
A distinct sense of inevitability surrounds the use of AI, machine learning and predictive analytics in social support systems.
The overriding message is: get on board or be left behind in ‘the 4th industrial revolution’.
AI has proven extremely beneficial in fields such as engineering and aspects of health care where decision-making is quantitative and based on objective rationalities. It can also be harnessed to great effect in the analysis of social data to guide macro-level decision-making, although this comes with risks, too.
But decision-making in social work practice is overwhelmingly an ethical concern, in which risk and rationality are situated and contextual. Social workers are required to respond sensitively and professionally to the messiness and complexity of human life, social interactions and interdependence. Key to all this is social workers’ capacity for empathy.
Numerous articles questioning whether AI can be empathic overwhelmingly conclude that machines could only ever achieve ‘artificial empathy’. Reporting on AI’s mimicry of human feeling has revealed an uncanny, often disturbing, authenticity. But it is a wholly artificial construct that plays on our human tendency to overlay meaning on our observations and interactions.
Our capacity to feel and perceive unquantifiable phenomena, and to interpret them subjectively, is often framed as an undesirable human fallibility by those seeking to posit machines as better, more objective and therefore more reliable decision-makers.
In this conception, humans make mistakes and commit errors of judgement, the strong implication being that machines don’t make mistakes or, at least, when they do, it is the result of humans feeding them bad data, of some design flaw, or of humans not properly understanding or simply overlooking what a machine is telling them.
In social work, this raises important questions around accountability.
An AI-augmented app for social workers recently came onto the market, developed in the US but available for download in the UK. A key selling point is that it will save social workers time and labour. In this new, untapped and unregulated commercial space, it seems likely that more apps and platforms making similar promises will follow.
If a social worker uses an AI-assisted app or platform to produce, say, a risk assessment on which their professional decision is based, and something goes badly wrong as a consequence, where does the responsibility for that lie? The makers of the app/platform? The employer who sanctioned its use? Whoever provided the information (or didn’t share relevant information) on which the assessment was based?
Ultimately, it is the social worker who is accountable for the professional decisions arising from that assessment.
I predict that AI will be increasingly peddled as a solution to social work’s time constraints and workload crisis. Tools and apps will be developed that produce case notes and assessments from data input by social workers who will then review, revise and refine the outputs of these AI ‘assistants’.
Social workers will be told that there will be gains in quality and efficiency, and in their wellbeing and working conditions. They will be told this will benefit the people they support, because these happier, healthier and more efficient social workers will be able to spend more time doing useful things with and for them.
It will be presented as a technocratic utopia in which social workers are finally freed from the shackles of bureaucracy and proceduralism to spend time doing the work they came to the profession to do, because they will have machines to fill out all those unnecessary forms and paperwork.
But if the problem is too many forms and too much paperwork, is the answer to get machines to do it for us or to get rid of those forms and paperwork? A key lesson from Covid-19 is that, in times of crisis, we can indeed jettison the bureaucracy that doesn’t serve or protect social workers or the people they support.
However, the management of bureaucracy in social work is a commercial concern. Private businesses provide the digital platforms and systems public bodies use to manage, store and analyse data, and to measure the performance of social workers.
Since the 1990s digital transformation consultants and tech companies have sold us the line that digitisation in social work will save time and allow us to help more people.
Instead, technology has created more bureaucracy and methods of control and constraint on professionalism and taken social workers further away from the people they came into the profession to support. It is reported that social workers now spend around 80 per cent of their time at a computer.
We are in real danger of being sold, or having imposed on us, another technological solution to this technology-created preponderance of bureaucracy, driven by audit culture, business logic and local authority corporate fear arising from high profile child protection ‘failings’ which are, most often, rooted in wider, systemic problems.
In a time of rising need (due in no small part to a politically mandated hollowing out of the welfare state, ideologically driven hostile environments for marginalised and oppressed groups, and catastrophically depleted resources), it is naive to believe that time saved by social workers using AI will simply be given back to them so they can spend time with people, engage in reflective forums and attend to their own wellbeing.
It is far more likely that more work and more assessments will come along to fill those gaps, leaving even less time to review and refine the outputs of these AI ‘assistants’.
The implications of this are stark and troubling. How long before the first AI-associated harm occurs in relation to social work? And who will be accountable? Rest assured, “AI told me to do it” will not wash with the regulator or the courts.
In the context of uncritical and often misplaced faith in ‘objective’ rationalities and ‘unbiased’ empiricism, it seems increasingly out of place to say that humans will always make better social work practice-related decisions than machines.
Further, to maintain that this is because of, not despite, the fact that we are influenced by our emotions and sensibilities, our values, and our experiential knowledge and wisdom (the things that make up our various and imprecise forms of intelligence, and which we bring to bear in navigating the complexity, messiness, ambiguity and situated risk rationalities that characterise social work) is a direct rebuttal of the view that technology should act as a counter to human fallibility in social support systems.
These capacities are not easily quantifiable, and are certainly mutable and contestable. And they are fallible, difficult though that is to countenance in an area such as social work, where that fallibility can and does lead to actual harm, and to social workers being at times unjustifiably blamed for intervening too much, or for not doing enough.
But it is this fallibility that also undergirds our capacity for empathy, and it is because of this fallibility that we must be professionally accountable. Such checks and balances are not a bad thing, at least in principle, even if, as is too often the case, these accountability mechanisms are used against us, to apportion blame and responsibility where these arguably lie elsewhere, or at least should be shared across the relevant systems.
In social work, understandably, we have a horror of making mistakes. But, in order to learn from our mistakes, we need to own them. And where mistakes are made elsewhere, this needs to be made clear so that social workers are not unfairly castigated, which helps no-one.
Currently, there is no way to hold AI or its makers to account, hence its appeal to those who seek to use it to gain power and wealth. And, in the realm of social work, it is social workers who will bear the responsibility for bad machine-led decisions. The people we seek to support will suffer the consequences of AI and AI-assisted mistakes.
To those who embrace the oncoming proliferation of AI in our professional and personal lives, this may appear bleak and pessimistic. But AI stands to blur the lines between human and machine decision-making in ways ill-suited to ethical and accountable social work, where fine balances between rights and risks, protection and empowerment, and, often, compulsion and autonomy are the stuff of daily practice.
This requires empathic, contextually-nuanced, values-driven, collaborative, relational and, ultimately, human decision-making.
Will AI help us achieve that aim, or threaten to undermine it?
Christian Kerr is a lecturer in social work and social care at Leeds Beckett University
‘There is hope in honest error. None in the icy perfections of the mere stylist.’ - Charles Rennie Mackintosh