I was nervous as I stood up in front of the audience of scientists and engineers. I’d been invited to give a keynote at this conference, but I was not sure whether many of them would be interested in my topic: generative artificial intelligence and the future of work. It seemed to have snowed indoors that night; and for many, it was clear that their vibrant working days were over. Moreover, I was following on from two experts in the field: a Professor in Computer Graphics at a New Zealand university and a theologian on the Board of the Global Network for Digital Theology. What could I offer? Particularly as the professor had already given the oft-quoted and ominous statistic that AI’s impact will wipe out 50% of jobs.

I started with a word of caution. In 1930 English economist and philosopher John Maynard Keynes predicted that the creation of labor-saving devices would mean that by 2030 we would only be working 15 hours a week. That is still unlikely for most of us who work; and definitely impossible to imagine for the vast majority of people in the world. So, predictions of loss of jobs or reduced work hours have frequently been wrong in the past.

Nevertheless, we are likely to see big shifts in jobs and employment as the AI revolution gathers pace. I shared how I had asked ChatGPT about the benefits and losses of AI for work, and that its response put a generally positive spin on increased productivity and removing the drudgery of repetitive jobs.

Thus, what ChatGPT and most conversations around work reflect are the following assumptions: first, that there is no good in work, almost as if work is a process of un-creation. Second, that work is a bad thing from which we need to be saved, that work results from, or is even a punishment for sin. Third, that technology can save us both from the drudgery of work and the mess we have created, a techno-optimist salvation story. And finally, that paradise is a place where humans don’t have to work anymore; this is the heaven that people desire. These are commonly held beliefs, and therefore are the biases being implicitly integrated into AI.

However, the Bible has a much better story for our work: first, work is good (Genesis 1:26–28, 2:15); it existed before sin entered the world. Humans and our work are part of the unfolding of the potential of creation. Second, the process of working has been impacted by sin. It is the ground that was cursed in Genesis 3:17, and this is why we experience work as difficult, tiring, and not as productive as we hope. Third, Jesus’ entry into the world has opened new possibilities for our work and workplaces to be part of the process of redemption of all things in heaven and on earth (Colossians 1:15–20). We can rediscover our role as image-bearers of God, building for his kingdom on earth, as it is in heaven. Finally, that there will be work in the New Creation, and we will be working in perfect relationship with God, creation and each other (Revelation 21:24).

“It would be interesting if these assumptions were fed into AI; what would change?” I wondered out loud to the audience. I then shared that what makes the AI revolution different from the industrial revolution is the breadth of jobs impacted.

I asked the audience about common characteristics of the jobs that would be impacted. Most picked up that AI will take over jobs that involve repetitive or routine tasks; or rule-based work. AI is excellent at identifying patterns and managing multiple processes.

Manufacturing and warehousing jobs will go. Many have seen the Amazon warehouse with robots doing the bulk of the selecting and packing at dizzying speeds. Agricultural jobs are increasingly automated. It is quite stunning to see the advances in robotic technology in agriculture, as our demand for cheap food drives that revolution. We are also familiar with the effect on transportation jobs. In Sydney, Australia, we have quietly accepted driverless Metro trains as the new normal. Imagine what it will be like to have driverless trucks crisscrossing the countryside.

We can also imagine the impact on retail jobs. Already we know that we can order a product online and every step of the process can be managed without any human intervention… even our complaints! Perhaps people will not be upset that telemarketing jobs might go. AI has improved so markedly in mimicking human characteristics and emotions that telemarketing could be almost entirely delivered by chatbots. (Yet, for some reason, those used by my bank still cannot understand me!)

Administration jobs will also disappear. Just as computers were the death of secretaries and typing pools, we will see a much broader range of administrative tasks being completed by AI. One of my friends relies on AI to respond to most of her business emails, to program routine tasks, and to summarize phone call messages. AI has also taken over many routine financial services, and already there is at least preliminary AI assessment of documentation for bank loan approvals, and insurance claims.

Healthcare jobs also are under threat. We have accepted robotic help in surgical procedures, but there is a lot of healthcare administration that can be done by AI. Legal jobs are also under threat. Much legal research can be done by AI, and there is even talk of preliminary assessment of cases, leaving magistrates and judges to make the final decision.

Journalism jobs are under threat, and we may well be shocked at how much of the news we consume has been compiled and summarized by AI. Increasingly, humans are being primarily used for opinion pieces, and more long-form writing.

In sum, we are seeing that AI will impact not only entry-level jobs, but professions that require university degrees. In fact, AI will affect an extraordinary breadth of industries.

The audience leaned in as I promised to identify how to future-proof your career from AI, and identified the following sorts of jobs. In healthcare, direct patient care jobs still have a future. People still yearn for human contact, and patient care requires a degree of judgment and physical touch which is difficult to simulate. Those same sorts of skills are needed for social work, where direct personal care, including diagnosis of complex conditions, still requires human intervention. Education jobs are similarly safe, in the short term. Many point to a future when AI will enable individually tailored educational programs for children to reach their potential; but there is still a level of empathic connection and holistic diagnosis which is beyond AI. Though perhaps on shaky ground, even hospitality jobs will still be required for now. While I have enjoyed the novelty of having my yum cha delivered by a robot, there is still a level of empathic engagement and creative finesse which is part of the idea of hospitality, that process of making strangers feel like friends.

Skilled trades jobs are safe. Yes, we can 3D-print houses, but we still need humans to maintain them: matching paint, fixing broken doors and windows, replacing cracked tiles.

Leadership roles also will still be required. Humans are complex and unpredictable, and it takes a human leader to skillfully build a team and motivate them to complete work to quality, and on time. Emergency services skills will also be needed into the foreseeable future. We will still need humans to turn up to a fire, police or medical emergency, to assess what is happening, and rapidly determine best next steps. Robocops are still a way off.

There is also an ongoing need for creatives. Although AI art has won significant art and photographic competitions, it will always be derivative, and depends on original human creation. Longer-term security is guaranteed for most not-for-profit roles, since that work is complex, often crosses cultural boundaries, and deals with wicked problems, which are by definition not patterned or rule-based. Finally, religious jobs are safe, since they involve a degree of moral reasoning, empathic encounter and meaning-making which is beyond AI. AI Jesus may be biblically literate, but it is difficult to relate to him in any significant way.

Again, I asked the audience to work out the common characteristic of these more human-oriented roles and they affirmed the following ideas. Humans are superior to AI in showing and interpreting empathy, having emotional intelligence, and being able to motivate others.

Humans can exercise imagination and create original work. We can also discern holistically and use wisdom for complex problem solving. It takes a human to make a judgment, weighing up multiple factors, nuanced for the individual and their story. The ability to develop genuine interpersonal relationships is human, as is the ability to adapt to changing situations and to resolve conflict. There is also comfort in physical presence. It is also a human characteristic to have strategic vision for a group or organization, including the ability to manage ambiguous situations and people. Finally, we are able to understand complex moral concerns, apply ethical reasoning, and respond to intercultural complexity.

A member of the audience pointed out that all these “human job characteristics” are things that humans generally don’t actually do very well. In response, I noted that since human knowledge feeds into AI, that might be the reason why AI can’t do those things well!

While these are the sorts of things that Narrow AI cannot replicate, it does seem to be catching up. I pointed out that I have been impressed with how well recent AI software can reproduce humor, apparent empathy, and provide increasingly sophisticated answers to complex questions.

So, what is the difference between those AI pieces and human work? Perhaps it is something as complex and beautiful and true and hard to identify as soul?

With that enigmatic comment hanging over the audience I moved on to the economic consequences of AI, which include the obvious consequence of increased productivity. Some of this is reduced employment costs, just the flip side of the employment concerns we had been discussing. However, the real game-changer is that a human and AI together can lift the productivity of even the least competent employee to the level of the best-performing employee.

This would be wonderful if we shared in the wealth from the productivity gains; however, the current reality, and the strong likelihood, is that the benefits will not be shared in common. If so—while acknowledging my earlier caution about predicting job loss—we can expect large employment shifts and probably growing wage polarization. There are already job losses happening in some parts of the world, especially in the US. However, what is becoming increasingly evident is the polarization of wages. We have rapidly growing inequality in the world. The poorest 50% of the population consistently lags behind the top 10% in every region, and AI is likely to accelerate that trend.

The power of corporations over governments is likely to be accentuated. We have typically relied on governments to regulate or control rampant corporations, but governmental interventions are becoming increasingly ineffective, as corporations cross international boundaries and have balance sheets bigger than all but the wealthiest countries.

There is also likely to be increased global inequality. There is already a gulf between the Global North and the Global South; but that is going to be increased due to access to AI technologies. There is a growing digital divide.

Some of this might seem too future-oriented and conceptual, so I offered to the audience an example that was more relatable. I told them of a recent job offer where my contract specified that my employer “may carry out workplace surveillance on devices via software installed on computers and systems operated by the entity, in accordance with its Workplace Surveillance Policy. This monitoring includes use of email, internet, and all systems usage.” This is an increasingly common requirement in work contracts, and it is complicated.

While it is undeniable that work surveillance leads to increased productivity, cost savings, enhanced digital security, and provides an objective measure for performance evaluation, we have to weigh that up against the increasingly evident negative impacts. Privacy concerns have always been recognized. How far does the software go in analyzing things I do on my computer which are not related to my job? What is it watching through my camera? What metadata is being captured through my keystrokes? Any time one is conscious of being watched, there is an erosion of trust between the watcher and the watched. Why does my employer need to watch me? Can’t they trust me to deliver the work that I have promised to do?

There are also concerns regarding ethics and bias. AI might not take into consideration gender or religious practices; and unions have demonstrated that biased assumptions are programmed in. There is a potential for work surveillance to go beyond checking work and to become about micro-managing employees. Such concerns are already leading to increasing reports of stress, anxiety and mental strain from the process of being so closely watched at work; leading to distrust within the workplace and even paranoia. There have also been unforeseen consequences, such as reducing the amount of social connectivity which is part of the oxygen that sustains an organization. Such incidental social collaboration often fuels creativity and innovation within an organization.

As I shared this real-life current scenario, I could feel the energy in the room drop. People were feeling depressed.

However, behind every existential threat is an opportunity, and the church can step up here. We need to teach a theology of work including the dignity of all work, and the meaning and purpose of work. We also need to break down the sacred–secular divide so that Christians can be more vocal for human-centered decisions at work.

This is to help advocate for human-centered AI, to ensure that the conversation about work is not just about efficiency and productivity but also includes recognition of the need for dignity, meaning, purpose and relationships. It’s best to prioritize AI-human collaboration rather than human replacement. In this way the church can be part of the movement to promote ethical AI development and use, addressing concerns about ethics, transparency, bias, and fairness. This is essential to protect those who are vulnerable in society and to encourage corporations and government to ensure AI systems are aligned for the common good. Finally, churches should be at the forefront of advocating for global AI justice, seeking to bridge the digital divide, and for using AI positively to advance health and economic support in the Global South.

Churches can also help practically, in both service and outreach. Just as happened during the global financial crisis, churches can offer reskilling, education, and resources to educate people about the future impact of AI on job sectors. Churches should also lead the move to provide emotional, social, and spiritual support for those impacted by AI, as well as building resilient communities with strong relationships. Such communities would be a base to promote ethical AI consumerism, supporting organizations that value human labor and ethical AI practices (like Australia’s ethical fashion guide which has changed organizational behavior). Churches could gather experts in AI, theology and ethics to promote best practices.

As I finished these lists, I pointed out the delicious irony that most of these suggestions were generated by AI! I had collaborated with ChatGPT to do the research, and I kept asking questions beyond the pat and obvious responses—and the overly positive—to something that felt more true, and good.

Maybe what surprised me most, though, was the response to my presentation: people were genuinely interested and engaged with the material, wanting to push further, tell stories, check sources, and ask questions. Energy began to return to the room. We are not doomed to AI destroying our world, though, as with any technology, it does create peril. Whatever your age, whatever your initial response to AI—optimist or doomsayer—we are all intrigued as AI forces us to question what it means to be human. There is good thinking—even good work—to be done as we ponder how AI can become part of our world in a good way, not a destructive one.

Kara Martin is the author of Workship: How to Use your Work to Worship God, and Workship 2: How to Flourish at Work; and co-author of Keeping Faith: How Christian organisations can stay true to the way of Jesus. She is Adjunct Professor with Gordon-Conwell Theological Seminary, Boston and a lecturer with Mary Andrews College, Sydney. Kara is also a Visiting Fellow with the Mockler Center for Faith and Ethics in the Public Square and on the Board of the Theology of Work Project in the US. She is the 2024 winner of the Australian Faith & Work Award (presented by Ethos/Evangelical Alliance).
